  Nov 27, 2013
    • Test: Small bugfix for tst-loadbalance · 3f494068
      Nadav Har'El authored
      
      Add missing join() in tst-loadbalance, to avoid rare crashes during the
      test.
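
      The bug pattern looks roughly like this (a minimal sketch, not the
      actual tst-loadbalance code; the worker body is hypothetical):
      destroying a std::thread that is still joinable terminates the
      program, so every worker must be joined before the test returns:

        #include <thread>
        #include <vector>

        void run_workers() {
            std::vector<std::thread> workers;
            for (int i = 0; i < 4; i++) {
                workers.emplace_back([] { /* CPU-bound work (hypothetical) */ });
            }
            for (auto& t : workers) {
                t.join();   // the missing join(): wait for each worker to exit
            }
        }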
      
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • Test for scheduler's single-CPU fairness. · a8c2fea7
      Nadav Har'El authored
      
      This patch adds tst-scheduler.cc, containing a few tests for the fairness
      of scheduling of several threads on one CPU (for scheduling issues involving
      load-balancing across multiple CPUs, check out the existing tst-loadbalance).
      
      The test is written in standard C++11, so it can be compiled and
      run on both Linux and OSv, to compare their scheduler behaviors.
      It is actually more a benchmark than a test (it doesn't "succeed" or "fail").
      
      The test begins with several tests of the long-term fairness of the
      scheduler when threads of different or identical priorities are run for
      10 seconds, and we look at how much work each thread got done in those
      10 seconds. This test only works on OSv (which supports float priorities).
      
      The second part of the test again tests long-term fairness of the scheduler
      when all threads have the default priority (so this test is standard C++11):
      We run a loop which takes (when run alone) 10 seconds, on 2 or 3
      threads in parallel. We expect to see that all 2 or 3 threads
      finish at (more-or-less) exactly the same time - after 20 or 30
      seconds. Both OSv and Linux pass this test with flying colors.
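
      A minimal sketch of that second part, assuming a busy-loop calibrated
      (via the hypothetical ITERATIONS constant) to take roughly 10 seconds
      when run alone:

        #include <chrono>
        #include <cstdio>
        #include <thread>
        #include <vector>

        // Hypothetical calibration: tune so loop() takes ~10 s run alone.
        static const long long ITERATIONS = 4000000000LL;

        static void loop() {
            volatile long long sum = 0;
            for (long long i = 0; i < ITERATIONS; i++) {
                sum += i;
            }
        }

        void fairness_test(int nthreads) {
            auto start = std::chrono::steady_clock::now();
            std::vector<std::thread> threads;
            for (int i = 0; i < nthreads; i++) {
                threads.emplace_back([start, i] {
                    loop();   // ~10 seconds of pure CPU work when run alone
                    std::chrono::duration<double> d =
                        std::chrono::steady_clock::now() - start;
                    printf("thread %d finished after %g seconds\n", i, d.count());
                });
            }
            for (auto& t : threads) {
                t.join();
            }
        }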
      
      The third part of the test runs two different threads concurrently:
       1. One thread wants to use all available CPU to loop for 10 seconds.
       2. The second thread wants to loop for an amount of work that takes N
          milliseconds, then sleep for N milliseconds, and so on, until
          completing the same number of loop iterations that (when run
          alone) takes 10 seconds.
      
      The "fair" behavior of the this test is that both threads get equal
      CPU time and finish together: Thread 2 runs for N milliseconds, then
      while it is sleeping for N more, Thread 1 gets to run.
      We measure this for N=1 through N=32 ms. In OSv's new scheduler, indeed both
      threads get an almost fair share (with N=32ms, one thread finishes in 19
      seconds, the second in 21.4 seconds; we don't expect total fairness because
      of the runtime decay).
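
      A sketch of that loop-and-sleep thread (the real test counts loop
      iterations; spinning until a wall-clock deadline, as the hypothetical
      busy_loop_for() helper below does, is a simplification of the idea,
      not the committed code):

        #include <chrono>
        #include <thread>

        static void busy_loop_for(std::chrono::milliseconds ms) {
            auto end = std::chrono::steady_clock::now() + ms;
            while (std::chrono::steady_clock::now() < end) {
                // spin: consume CPU until the deadline passes
            }
        }

        void duty_cycle_thread(int n_ms) {
            std::chrono::milliseconds work(n_ms), done(0);
            // aim for 10 seconds' total of looping, in N ms work/sleep cycles
            while (done < std::chrono::seconds(10)) {
                busy_loop_for(work);
                done += work;
                std::this_thread::sleep_for(work);
            }
        }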
      
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • tst-tcp: set SO_REUSEADDR · 63a99d22
      Avi Kivity authored
      
      Allows longer tests to be run.
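
      For reference, the usual pattern (a sketch, not the tst-tcp code
      itself; error handling omitted): the option must be set before bind(),
      so that a port left in TIME_WAIT by a previous run can be rebound:

        #include <arpa/inet.h>
        #include <cstdint>
        #include <netinet/in.h>
        #include <sys/socket.h>

        int make_listener(uint16_t port) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int one = 1;
            // allow rebinding an address still in TIME_WAIT
            setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
            sockaddr_in addr = {};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(port);
            bind(fd, (sockaddr*)&addr, sizeof(addr));
            listen(fd, 5);
            return fd;
        }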
      
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  Nov 25, 2013
    • tests: mincore() tests for demand paging · 20aad632
      Pekka Enberg authored
      
      As suggested by Nadav, add tests for mincore() interaction with demand
      paging.
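
      The interaction being tested looks roughly like this (a sketch under
      assumed demand-paging semantics, not the committed test): an untouched
      anonymous page is reported non-resident by mincore() until the first
      write faults it in:

        #include <sys/mman.h>
        #include <unistd.h>
        #include <cassert>

        void check_demand_paging() {
            size_t len = getpagesize();
            char* p = (char*)mmap(nullptr, len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            unsigned char vec;
            mincore(p, len, &vec);
            assert(!(vec & 1));   // no fault yet, so the page is not resident
            *p = 1;               // touching the page demand-faults it in
            mincore(p, len, &vec);
            assert(vec & 1);      // now resident
            munmap(p, len);
        }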
      
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • tests: Anonymous demand paging microbenchmark · d4bcf559
      Pekka Enberg authored
      
      This adds a simple mmap microbenchmark that can be run on both OSv and
      Linux.  The benchmark mmaps memory of various sizes and touches the
      mmap'd memory in 4K increments to fault in memory.  The benchmark also
      repeats the same tests using MAP_POPULATE for reference.
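
      The inner loop is roughly the following (a sketch of the idea, not the
      committed benchmark):

        #include <sys/mman.h>
        #include <chrono>
        #include <cstddef>

        double time_mmap_touch(size_t bytes, bool populate) {
            int flags = MAP_PRIVATE | MAP_ANONYMOUS |
                        (populate ? MAP_POPULATE : 0);
            auto start = std::chrono::steady_clock::now();
            char* p = (char*)mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                                  flags, -1, 0);
            for (size_t off = 0; off < bytes; off += 4096) {
                p[off] = 1;   // one write per 4K page faults it in
            }
            std::chrono::duration<double> elapsed =
                std::chrono::steady_clock::now() - start;
            munmap(p, bytes);
            return elapsed.count();
        }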
      
      OSv page faults are slightly slower than Linux's on the first iteration,
      but faster on subsequent iterations, after the host operating system has
      faulted in memory for the guest.
      
      I've included full numbers on a 2-core Sandy Bridge i7 for an OSv guest,
      Linux guest, and Linux host below:
      
        OSv guest
        ---------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.004  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.007  0.000
          64 0.013  0.000
         128 0.024  0.000
         256 0.052  0.001
         512 0.229  0.002
        1024 0.587  0.005
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.019  0.001
         256 0.036  0.001
         512 0.069  0.002
        1024 0.137  0.005
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.000
         256 0.039  0.001
         512 0.087  0.002
        1024 0.138  0.005
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.012  0.000
         128 0.025  0.001
         256 0.040  0.001
         512 0.082  0.002
        1024 0.138  0.005
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.012  0.000
         128 0.028  0.001
         256 0.040  0.001
         512 0.082  0.002
        1024 0.166  0.005
      
        Linux guest
        -----------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.001  0.000
           4 0.002  0.000
           8 0.003  0.000
          16 0.005  0.000
          32 0.008  0.000
          64 0.015  0.000
         128 0.151  0.001
         256 0.090  0.001
         512 0.266  0.003
        1024 0.401  0.006
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.009  0.000
         128 0.019  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.144  0.006
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.019  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.143  0.006
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.001
         256 0.038  0.001
         512 0.073  0.003
        1024 0.143  0.006
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.144  0.006
      
        Linux host
        ----------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.009  0.000
         128 0.019  0.001
         256 0.035  0.001
         512 0.152  0.003
        1024 0.286  0.011
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.192  0.003
        1024 0.334  0.011
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.194  0.003
        1024 0.329  0.011
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.036  0.001
         512 0.138  0.003
        1024 0.341  0.011
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.135  0.002
        1024 0.324  0.011
      
      Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  Oct 30, 2013
    • tests: Remove tst-zfs-simple.so · f966188e
      Pekka Enberg authored
      
      The tst-zfs-simple.so test case has served its purpose for bringup.  As
      the OSv build now relies on working ZFS, there's no need to run the test.
      
      Furthermore, we have the full ztest stress test in the tree:
      
        bsd/cddl/contrib/opensolaris/cmd/ztest/ztest.c
      
      which we can use if needed.
      
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  Oct 29, 2013
    • runjava: Add missing delegation to OSv system classloader · fe1752e5
      Tomasz Grabiec authored
      
      When the system classloader is used as a parent, some of its protected
      methods are called to look up resources.
      
      This also adds delegation for all remaining protected and public
      methods.
      
      Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
    • vfs: fix incorrect rename() behavior with trailing slashes · 06f842f0
      Tomasz Grabiec authored

      Fix 1. Renaming an existing directory to an existing file when the
      destination path has a trailing slash should fail:
      
        rename("/tmp/dir", "/tmp/file/") == -1 && errno == ENOTDIR
      
      Fix 2. Renaming an existing directory to a nonexistent path when the
      path has a trailing slash should succeed:
      
        rename("/tmp/dir", "/tmp/im_missing/") == 0
      
      Fix 3. Renaming an existing file to a nonexistent path when the path
      has a trailing slash should fail:
      
        rename("/tmp/file", "/tmp/im_missing/") == -1 && errno == ENOTDIR
      
      Reference:
      http://pubs.opengroup.org/onlinepubs/9699919799/functions/rename.html

      This change also adds a bunch of test cases for various conditions
      mentioned in the spec. Note that for some cases where the rename()
      should fail, there are discrepancies between Linux and OSv in the
      error codes set by the call. To my understanding, the spec is
      ambiguous: there are conditions which match more than one error code
      description. The tests therefore allow the call to set the error
      code from a set of matching error codes (e.g. both EEXIST and ENOTEMPTY
      are allowed when applicable). If we decide that OSv should return
      exactly the same error codes as Linux for the same conditions, this
      can be changed later.
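
      In test code, that pattern can look like this (a hypothetical helper,
      not the actual test source):

        #include <cerrno>
        #include <cstdio>
        #include <initializer_list>

        bool expect_rename_error(const char* from, const char* to,
                                 std::initializer_list<int> allowed) {
            if (rename(from, to) == 0) {
                return false;   // the call was expected to fail
            }
            for (int e : allowed) {
                if (errno == e) {
                    return true;
                }
            }
            return false;       // failed, but with an unexpected error code
        }

        // e.g. accept either error code where both descriptions apply:
        //   expect_rename_error("/tmp/dir", "/tmp/nonempty", {EEXIST, ENOTEMPTY});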
      
      vfs: rename() should fail if paths differ only by a trailing slash
      
      This should fail:
      
        rename("/tmp/file", "/tmp/file/")
      
      Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
    • vfs: namei() should return ENOTDIR when component is not a directory · 1fe30840
      Tomasz Grabiec authored
      
      The call to namei("/dir/file/") currently fails with ENOENT when
      "/dir/file" exists. A more standard way is to return ENOTDIR
      instead. This way calls to stat, open, rename, etc. will be in
      line with the POSIX spec.
      
      It is also useful to the rename() implementation, which needs to
      differentiate between the case in which the target does not exist
      and the case in which it does, but the path has a trailing slash
      and the last component is not a directory.
      
      In addition, the check was performed in an inconsistent manner -
      only when dentry lookup failed. This change makes the check
      unconditional.
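
      In terms of the user-visible calls, the new behavior is (a sketch;
      assumes "/dir/file" exists and is a regular file):

        #include <sys/stat.h>
        #include <cassert>
        #include <cerrno>

        void check_trailing_slash() {
            struct stat st;
            assert(stat("/dir/file", &st) == 0);   // the file itself resolves
            assert(stat("/dir/file/", &st) == -1); // trailing slash fails...
            assert(errno == ENOTDIR);              // ...with ENOTDIR, not ENOENT
        }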
      
      Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>