Dec 24, 2013
    • bsd: convert the Xen stuff to C++ · 828ec291
      Avi Kivity authored
      
      Helps with making changes to the bsd headers that xen includes.
      
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
    • sched: Overhaul sched::thread::attr construction · eb48b150
      Nadav Har'El authored
      
      We use sched::thread::attr to pass parameters to sched::thread creation,
      e.g., to create a thread with non-default stack parameters, pinned to a
      particular CPU, or detached.
      
      Previously we had constructors taking many combinations of stack size
      (integer), pinned cpu (cpu*) and detached (boolean), and doing "the
      right thing". However, this makes the code hard to read (what does
      attr(4096) specify?) and the constructors hard to expand with new
      parameters.
      
      Replace the attr() constructors with the so-called "named parameter"
      idiom: attr now has only a default constructor attr(), and one modifies
      it with calls to pin(cpu*), detach(), or stack(size).
      
      For example,
          attr()                                  // default attributes
          attr().pin(sched::cpus[0])              // pin to cpu 0
          attr().stack(4096).pin(sched::cpus[0])  // pin and non-default stack
          and so on.
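
      As a rough illustration of the idiom, here is a minimal sketch of what
      such an attr class can look like; the pin/detach/stack names follow this
      commit message, but the field types and defaults below are assumptions,
      not the actual OSv declaration:

          #include <cstddef>

          struct cpu;   // opaque CPU descriptor, for the sketch only

          class attr {
          public:
              attr() = default;                            // default attributes
              attr& pin(cpu* c)           { _pinned_cpu = c;   return *this; }
              attr& detach()              { _detached = true;  return *this; }
              attr& stack(std::size_t sz) { _stack_size = sz;  return *this; }
          private:
              cpu*        _pinned_cpu = nullptr;   // nullptr means "not pinned"
              bool        _detached   = false;
              std::size_t _stack_size = 65536;     // assumed default stack size
          };

      Each setter returns *this, so the calls chain naturally, as in
      attr().stack(4096).pin(sched::cpus[0]) above.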
      
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
Dec 15, 2013
    • enable interrupts during page fault handling · ec7ed8cd
      Glauber Costa authored
      
      Context: going to wait with interrupts disabled is a recipe for disaster.
      While it is true that not every call to wait() actually ends up waiting,
      such a call should still be considered invalid, because of the times when
      we do wait. It would therefore be good to catch it with an assertion.
      
      There are, however, places where we currently sleep with irqs disabled.
      Although they are technically safe, because we implicitly enable
      interrupts, they end up reaching wait() in an unsafe state. That happens
      in the page fault handler. Explicitly enabling interrupts there will
      allow us to test for valid / invalid wait status.
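
      A minimal sketch of the idea; irq_enable() and irq_enabled() below are
      hypothetical helpers standing in for whatever primitives OSv actually
      uses, and the stubs exist only to keep the example self-contained:

          #include <cassert>

          // Stubs standing in for the real interrupt-state primitives.
          static bool irqs_on = false;
          static void irq_enable()  { irqs_on = true; }
          static bool irq_enabled() { return irqs_on; }

          // In the scheduler: waiting with interrupts disabled is now a bug.
          static void wait()
          {
              assert(irq_enabled());   // the assertion this change enables
              // ... block until woken ...
          }

          // In the page fault handler: enable interrupts before the slow
          // path, which may allocate memory or read from disk and so sleep.
          static void page_fault_slow_path()
          {
              irq_enable();            // explicitly enable interrupts
              wait();                  // now a valid call
          }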
      
      With this change applied, all tests in our whitelist still pass.
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
Dec 11, 2013
    • x64: Make page fault handler arch specific · 43491705
      Pekka Enberg authored
      
      Simplify core/mmu.cc and make it more portable by moving the page fault
      handler to arch/x64/mmu.cc.  There's more arch specific code in
      core/mmu.cc that should also be moved.
      
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • Verify slow page fault only happens when preemption is allowed · b7620ca2
      Nadav Har'El authored
      
      Once page_fault() checks that this is not a fast fixup (see safe_load()),
      we reach the page-fault slow path, which needs to allocate memory or
      even read from disk, and might sleep.
      
      If we ever get such a slow page-fault inside kernel code which has
      preemption or interrupts disabled, this is a serious bug, because the
      code in question thinks it cannot sleep. So this patch adds two
      assertions to verify this.
      
      The preemptable() assertion is easily triggered if stacks are demand-paged
      as explained in commit 41efdc1c (I have
      a patch to solve this, but it won't fit in the margin).
      However, I've also seen this assertion fire without demand-paged stacks,
      when running all the tests together through testrunner.so. So I'm hoping
      these assertions will be helpful in hunting down some elusive bugs we
      still have.
      
      This patch adds a third use of the "0x200" constant (bit 9 of the rflags
      register is the interrupt flag), so it replaces them all with a new
      symbolic name, processor::rflags_if.
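
      A hedged sketch of the constant and the two assertions described above;
      the rflags() and preemptable() stubs are placeholders for illustration,
      only the processor::rflags_if name itself comes from this patch:

          #include <cassert>
          #include <cstdint>

          namespace processor {
              // Bit 9 of rflags is the interrupt-enable flag (IF).
              constexpr std::uint64_t rflags_if = 0x200;
              // Stub standing in for the real register read.
              inline std::uint64_t rflags() { return rflags_if; }
          }

          // Stub: true when preemption is currently allowed.
          inline bool preemptable() { return true; }

          // The page-fault slow path may allocate memory or even read from
          // disk, so it must only run where sleeping is legal.
          inline void assert_slow_fault_allowed()
          {
              assert(preemptable());                              // may preempt
              assert(processor::rflags() & processor::rflags_if); // irqs on
          }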
      
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
Dec 05, 2013
    • sched: remove on_thread_stack · 9bd939f8
      Glauber Costa authored
      
      no users in tree.
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • apic: fix allbutself delivery mode · 8a48cb55
      Glauber Costa authored
      
      Our APIC code is so wrong, but so wrong, that it even produces incorrect
      results.  X2APIC is fine, but XAPIC uses xapic::ipi() for all its
      interrupts. The problem with that is that the customary place for "vector"
      is inverted in the allbutself delivery mode, and therefore we were sending
      these IPIs to God Knows Where - not to the processors, that is for sure.
      As a result, we would spin waiting for IRQ acks that would never arrive.
      
      I could invert and reorganize the parameters and document it with a
      comment, but I've decided it is a lot clearer to just open code it. Also,
      there is no need at all to set ICR2 for allbutself, because the
      destination is already embedded in the firing mode.
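
      For reference, a hedged sketch of what open coding an all-but-self IPI in
      xAPIC mode can look like; the register offset and bit positions are the
      standard xAPIC ICR layout, but write_xapic_reg() and ipi_allbutself() are
      illustrative names, not OSv's actual code:

          #include <cstdint>

          // Standard xAPIC ICR (low word) fields.
          constexpr std::uint32_t icr_vector_mask     = 0xff;        // bits 0-7
          constexpr std::uint32_t icr_delivery_fixed  = 0u << 8;     // bits 8-10
          constexpr std::uint32_t icr_dest_allbutself = 0b11u << 18; // bits 18-19

          // Stub standing in for the real MMIO write to a local-APIC register.
          static void write_xapic_reg(std::uint32_t reg, std::uint32_t val)
          {
              (void)reg; (void)val;
          }

          static void ipi_allbutself(std::uint8_t vector)
          {
              // With the "all excluding self" shorthand the destination is
              // encoded in the ICR itself, so ICR2 does not need to be set.
              write_xapic_reg(0x300, icr_dest_allbutself
                                   | icr_delivery_fixed
                                   | (vector & icr_vector_mask));
          }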
      
      One issue: the NMI path is fixed in the same way, because it was wrong
      for the same reasons, but I don't have a test case for it.
      
      Fixes #110
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
Nov 26, 2013
    • Reduce number of unnecessary sections in our executable · 03aaf6b8
      Nadav Har'El authored
      
      This patch resolves issue #26. As you can see with "objdump -h
      build/release/loader.elf", our executable had over a thousand (!)
      separate sections, most of which should really be merged.
      We already started doing this in arch/x64/loader.ld, but didn't
      complete the work.
      
      This patch merges all the ".gcc_except_table.*" sections into one,
      and all the ".data.rel.ro.*" sections into one. After this merge,
      we are left with just 52 sections, instead of more than 1000.
      
      The default linker script (run "ld --verbose" to see it) also does
      similar merges, so there's no reason why we shouldn't.
      
      By reducing the number of ELF sections (each comes with a name, headers,
      etc.), this patch also reduces the size of our loader-stripped.elf
      by about 140K.
      
      Fixes #26.
      
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
    • xen: move per-cpu interrupt threads to .percpu section · 63d2e472
      Dmitry Fleytman authored
      
      The bug fixed by this patch made OSv crash on Xen during boot.
      The problem started to show up after this commit:
      
        commit ed808267
        Author: Nadav Har'El <nyh@cloudius-systems.com>
        Date:   Mon Nov 18 23:01:09 2013 +0200
      
            percpu: Reduce size of .percpu section
      
      Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
Nov 19, 2013
    • percpu: Reduce size of .percpu section · ed808267
      Nadav Har'El authored
      
      This patch reduces the size of the .percpu section 64-fold from about
      5 MB to 70 KB, and solves issue #95.
      
      The ".percpu" section is part of the .data section of our executable
      (loader-stripped.elf). In our 15 MB executable, roughly 7 MB is text
      (code), and 7 MB is data, and out of that, a whopping 5 MB is the
      ".percpu" section. The executable is read in real mode, and this is
      especially slow on Amazon EC2, hence our wish to make the executable
      as small as possible.
      
      The percpu section starts with all the PERCPU variables defined in the
      program. We have about 70 KB of those, and believe it or not, most of
      this 70 KB is just a single variable, the 65K dynamic_percpu_buffer
      (see percpu.cc).
      
      But then, we need a copy of these variables for each CPU. The unpatched
      code duplicated this 70KB section 64 times in the executable file (!),
      and then used these memory locations for up to 64 cpus. But there is
      no reason to duplicate this data in the executable! All we need to do
      is to dynamically allocate a copy of this section for each CPU, and
      this is what this patch does.
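
      A hedged sketch of the approach the message describes; the
      _percpu_start/_percpu_end symbol names and the percpu_init() hook are
      assumptions for illustration, not necessarily the names used in
      percpu.cc:

          #include <cstring>
          #include <cstdlib>

          // Assumed linker-provided symbols marking the single .percpu
          // template kept in the executable (one copy, not 64).
          extern "C" char _percpu_start[], _percpu_end[];

          struct cpu {
              char* percpu_base = nullptr;   // this CPU's private copy
          };

          // Called once per CPU at bring-up: allocate and fill a private
          // copy of the template instead of shipping 64 copies in the ELF.
          void percpu_init(cpu& c)
          {
              std::size_t size = _percpu_end - _percpu_start;
              c.percpu_base = static_cast<char*>(std::malloc(size));
              std::memcpy(c.percpu_base, _percpu_start, size);
          }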
      
      This patch removes about 5 MB from our executable: After this patch,
      our loader-stripped.elf is just 9.7 MB, and its data section's size is
      just 2.8 MB.
      
      Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>