  1. Nov 27, 2013
    • build.mk: process modules in one rule · c3f1f15c
      Tomasz Grabiec authored
      
      There is a race between the "usr.manifest" and "bootfs.manifest" rules,
      which both call module.py. The script does complex work for module
      preparation, such as fetching module files and calling make, and
      should not be run concurrently.
      
      This change fixes the problem by moving the calls into one rule.
      
      This is not the end of the story; more refactoring will follow.  The
      module.py script should be split into two parts, one that fetches modules
      and one that generates manifests. That way the dependencies could be
      made more fine-grained and the jobs parallelized.
      
      This fixes issue #100.
      
      Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • Test for scheduler's single-CPU fairness. · a8c2fea7
      Nadav Har'El authored
      
      This patch adds tst-scheduler.cc, containing a few tests for the fairness
      of scheduling several threads on one CPU (for scheduling issues involving
      load-balancing across multiple CPUs, see the existing tst-loadbalance).
      
      The test is written in standard C++11, so it can be compiled and
      run on both Linux and OSv, to compare their scheduler behaviors.
      It is actually more a benchmark than a test (it doesn't "succeed" or "fail").
      
      The test begins with several tests of the long-term fairness of the
      scheduler when threads of different or identical priorities are run for
      10 seconds, and we look at how much work each thread got done in those
      10 seconds. This part only works on OSv (which supports float priorities).
      
      The second part of the test again tests long-term fairness of the scheduler
      when all threads have the default priority (so this test is standard C++11):
      We run a loop which takes (when run alone) 10 seconds, on 2 or 3
      threads in parallel. We expect to see that all 2 or 3 threads
      finish at (more-or-less) exactly the same time - after 20 or 30
      seconds. Both OSv and Linux pass this test with flying colors.
      
      The third part of the test runs two different threads concurrently:
       1. One thread wants to use all available CPU to loop for 10 seconds.
       2. The second thread wants to loop in an amount that takes N
          milliseconds, and then sleep for N milliseconds, and so on,
          until completing the same number of loop iterations that (when run
          alone) takes 10 seconds.
      
      The "fair" behavior of the this test is that both threads get equal
      CPU time and finish together: Thread 2 runs for N milliseconds, then
      while it is sleeping for N more, Thread 1 gets to run.
      This measure this for N=1 through 32ms. In OSv's new scheduler, indeed both
      threads get an almost fair share (with N=32ms, one thread finishes in 19
      seconds, the second in 21.4 seconds; we don't expect total fairness because
      of the runtime decay).
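      
      A minimal C++11 sketch of this third scenario (illustrative only - the
      names, constants and calibration below are assumptions, not the actual
      tst-scheduler.cc code):
      
        // Hypothetical sketch of the "loop vs. loop-and-sleep" fairness test.
        #include <chrono>
        #include <thread>
        #include <iostream>
        
        using clk = std::chrono::steady_clock;
        
        // Trivial unit of work that the compiler cannot optimize away.
        static void work(long iterations)
        {
            static volatile long sink;
            for (long i = 0; i < iterations; i++) {
                sink = i;
            }
        }
        
        // Estimate how many work() iterations fit in one millisecond.
        static long calibrate()
        {
            long n = 1000000;
            auto t0 = clk::now();
            work(n);
            auto ms = std::chrono::duration<double, std::milli>(clk::now() - t0).count();
            return static_cast<long>(n / ms);
        }
        
        int main()
        {
            const long per_ms = calibrate();
            const long total = per_ms * 10000;   // ~10 seconds of work when run alone
            const long slice = per_ms * 32;      // the "N" above, here N = 32 ms
        
            auto start = clk::now();
            auto report = [&](const char* name) {
                std::cout << name << " finished after "
                          << std::chrono::duration<double>(clk::now() - start).count()
                          << " s\n";
            };
        
            // Thread 1: wants all available CPU until it completes ~10s of work.
            std::thread spinner([&] { work(total); report("spinner"); });
        
            // Thread 2: loops for N ms worth of work, sleeps N ms, and repeats
            // until it completes the same ~10s of work.
            std::thread sleeper([&] {
                for (long done = 0; done < total; done += slice) {
                    work(slice);
                    std::this_thread::sleep_for(std::chrono::milliseconds(32));
                }
                report("sleeper");
            });
        
            spinner.join();
            sleeper.join();
        }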
      
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  2. Nov 25, 2013
    • Start up shell and management web in parallel · c29222c6
      Amnon Heiman authored
      
      Start up the shell and the management web in parallel to make boot faster.
      Note that we also switch to the latest mgmt.git, which decouples JRuby and
      CRaSH startup.
      
      Signed-off-by: Amnon Heiman <amnon@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • tests: Anonymous demand paging microbenchmark · d4bcf559
      Pekka Enberg authored
      
      This adds a simple mmap microbenchmark that can be run on both OSv and
      Linux.  The benchmark mmaps regions of various sizes and touches the
      mmap'd memory in 4K increments to fault the pages in.  The benchmark also
      repeats the same tests using MAP_POPULATE for reference.
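      
      The core measurement loop boils down to roughly the following sketch
      (illustrative only; the exact test added by this patch differs in detail):
      
        // Rough sketch of the demand-paging benchmark loop (not the exact test).
        #include <sys/mman.h>
        #include <chrono>
        #include <cstdio>
        #include <cstdlib>
        
        // mmap an anonymous region, touch one byte per 4K page to fault it in,
        // and return the elapsed time in seconds.
        static double touch_mmap(size_t bytes, int extra_flags)
        {
            auto t0 = std::chrono::steady_clock::now();
            char* p = static_cast<char*>(mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                                              MAP_PRIVATE | MAP_ANONYMOUS | extra_flags,
                                              -1, 0));
            if (p == MAP_FAILED) {
                perror("mmap");
                exit(1);
            }
            for (size_t off = 0; off < bytes; off += 4096) {
                p[off] = 1;
            }
            munmap(p, bytes);
            return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
        }
        
        int main()
        {
            printf("     time (seconds)\n MiB demand populate\n");
            for (size_t mib = 1; mib <= 1024; mib *= 2) {
                size_t bytes = mib << 20;
                printf("%4zu %.3f  %.3f\n", mib,
                       touch_mmap(bytes, 0),              // demand paging
                       touch_mmap(bytes, MAP_POPULATE));  // prefaulted, for reference
            }
        }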
      
      OSv page faults are slightly slower than Linux's on the first iteration but
      faster on subsequent iterations, after the host operating system has faulted
      in memory for the guest.
      
      I've included full numbers on a 2-core Sandy Bridge i7 for an OSv guest,
      a Linux guest, and the Linux host below:
      
        OSv guest
        ---------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.004  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.007  0.000
          64 0.013  0.000
         128 0.024  0.000
         256 0.052  0.001
         512 0.229  0.002
        1024 0.587  0.005
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.019  0.001
         256 0.036  0.001
         512 0.069  0.002
        1024 0.137  0.005
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.000
         256 0.039  0.001
         512 0.087  0.002
        1024 0.138  0.005
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.012  0.000
         128 0.025  0.001
         256 0.040  0.001
         512 0.082  0.002
        1024 0.138  0.005
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.012  0.000
         128 0.028  0.001
         256 0.040  0.001
         512 0.082  0.002
        1024 0.166  0.005
      
        Linux guest
        -----------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.001  0.000
           4 0.002  0.000
           8 0.003  0.000
          16 0.005  0.000
          32 0.008  0.000
          64 0.015  0.000
         128 0.151  0.001
         256 0.090  0.001
         512 0.266  0.003
        1024 0.401  0.006
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.009  0.000
         128 0.019  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.144  0.006
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.019  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.143  0.006
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.001
         256 0.038  0.001
         512 0.073  0.003
        1024 0.143  0.006
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.144  0.006
      
        Linux host
        ----------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.009  0.000
         128 0.019  0.001
         256 0.035  0.001
         512 0.152  0.003
        1024 0.286  0.011
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.192  0.003
        1024 0.334  0.011
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.194  0.003
        1024 0.329  0.011
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.036  0.001
         512 0.138  0.003
        1024 0.341  0.011
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.135  0.002
        1024 0.324  0.011
      
      Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  3. Oct 30, 2013
    • Simplify host-side of zfs image build · be3e55d6
      Nadav Har'El authored
      
      This patch simplifies the host-side work in the new /usr zfs filesystem
      build process.
      
      Previously, we copied the files to a temporary directory, used "cpio"
      to archive them and sent its output to the guest with "netcat".
      
      With this patch, we no longer have a temporary directory, and do not
      need either cpio or netcat on the build machine.
      
      Instead, mkzfs.py itself, using Python (rather than a separate "nc" process),
      connects to the guest and sends it the files - still in the CPIO format.
      
      Rather than arbitrarily sleeping for 3 seconds before the host tries to
      connect to the guest (which might not be enough on some machines, or a waste
      of time on others), with this patch the host watches the guest's output
      and connects when it sees the message "Waiting for connection".
      
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
    • tests: Remove tst-zfs-simple.so · f966188e
      Pekka Enberg authored
      
      The tst-zfs-simple.so test case has served its purpose for bringup.  As
      the OSv build relies on a working ZFS now, there's no need to run this test.
      
      Furthermore, we have the full ztest stress test in the tree:
      
        bsd/cddl/contrib/opensolaris/cmd/ztest/ztest.c
      
      which we can use if needed.
      
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • external: update gcc to 4.8.2 · 5a11cc6e
      Avi Kivity authored
      
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
  4. Oct 29, 2013
    • vfs: namei() should return ENOTDIR when component is not a directory · 1fe30840
      Tomasz Grabiec authored
      
      The call to namei("/dir/file/") currently fails with ENOENT when
      "/dir/file" exists. The more standard behavior is to return ENOTDIR
      instead. This way, calls to stat, open, rename, etc. will be in
      line with the POSIX spec.
      
      It is also useful to the rename() implementation, which needs to
      differentiate between the case in which the target does not exist
      and the case in which it exists but the path has a trailing slash
      and the last component is not a directory.
      
      In addition, the check was previously performed in an inconsistent manner
      - only when the dentry lookup failed. This change makes the check
      unconditional.
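      
      A minimal sketch of the intended check (the types below are stand-ins for
      illustration; the real namei() in OSv's vfs layer is more involved):
      
        #include <cerrno>
        
        // Stand-in types for illustration only.
        enum vtype { VREG, VDIR };
        struct vnode { vtype v_type; };
        
        // Return ENOTDIR for "/dir/file/" when the last component exists but
        // is not a directory; the check runs unconditionally, not only when
        // the dentry cache lookup misses.
        static int check_trailing_slash(const vnode* vp, bool trailing_slash)
        {
            if (trailing_slash && vp->v_type != VDIR) {
                return ENOTDIR;
            }
            return 0;
        }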
      
      Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
    • build: omit source file from autodependencies · 47d3d232
      Avi Kivity authored
      
      When a source file is converted from .c to .cc, make will complain that the
      old source file is missing.  This is annoying, especially when bisecting
      or compile-testing a patch set.
      
      Fix this by removing the source file dependency from the generated
      dependencies.  Since the dependency is already specified explicitly by the
      makefile, no information is lost.
      
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>