  Dec 11, 2013
      Rename blacklisted tests · 4e4e191f
      Nadav Har'El authored
      
      Rename blacklisted tests, from tst-wake.cc et al. to misc-wake.cc.
      
      The different name will cause these tests not to be automatically run
      by "make check" - without needing the separate blacklist in test.py
      (which this patch deletes).
      After this patch, testrunner.so will also only run tests called tst-*,
      so it will not run the misc-* tests.
      
      The misc-* tests can still be run manually, e.g.,
        run.py -e tests/misc-mutex.so
      
      In addition to the previously blacklisted tests, this patch "blacklists"
      (renames) a few additional tests which fail quickly, but which test.py did not
      recognize as failing because their output doesn't contain the word "fail". An
      example is tst-schedule.so, which exited immediately when not run on 1 vcpu.
      So this patch also renames it to misc-schedule.so, so that "make check" and
      testrunner.so won't run this test.
      
      Note that after this patch, testrunner.so is a new way to run all tests,
      but it isn't working well yet because it still exposes new bugs that do not
      exist in the separate tests (depending on your viewpoint, this might be
      considered a feature, not a bug, in testrunner.so...).
      
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
      tst-fs-link.so: Use mktemp() for path names · df4a7bd2
      Pekka Enberg authored
      
      Using hard-coded path names is problematic because other test cases may
      use the same path names and forget to clean up after them.
      
      Make tst-fs-link.so more robust by using mktemp() to generate unique
      path names.
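
      The idea, roughly (an illustrative sketch with made-up file names, not the
      actual test code): pass a template ending in "XXXXXX" to mktemp(), which
      rewrites it in place into a unique path name:

        #include <cstdlib>      // mktemp()
        #include <fcntl.h>      // open()
        #include <unistd.h>     // close(), link(), unlink()

        int main()
        {
            // Templates must end in "XXXXXX"; mktemp() fills in a unique suffix,
            // or leaves an empty string if it cannot.
            char path[] = "/tmp/tst-fs-link-XXXXXX";
            char link_path[] = "/tmp/tst-fs-link-target-XXXXXX";
            mktemp(path);
            mktemp(link_path);
            if (path[0] == '\0' || link_path[0] == '\0') {
                return 1;
            }
            int fd = open(path, O_CREAT | O_WRONLY, 0666);
            if (fd < 0) {
                return 1;
            }
            close(fd);
            // With unique names, link() cannot collide with files left behind
            // by other tests that forgot to clean up.
            if (link(path, link_path) != 0) {
                return 1;
            }
            unlink(link_path);
            unlink(path);
            return 0;
        }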
      
      Reviewed-by: Tomasz Grabiec <tgrabiec@gmail.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
      tests: fix threads being destroyed too early · b0dc3f1a
      Glauber Costa authored
      
      The last part of the standard thread tests creates 4 threads, and each one's
      detach() is called from the body of another. They live in the same block, to
      guarantee that they will all be destroyed more or less at the same time (we
      expect). Avi, however, demonstrated that a mistake prevents that from being
      the actual case:
      
          t1 starts to run
          t2 starts to run
          t3 starts to run
          t4 starts to run
          t4 is detached
          t4 is destroyed (ok)
          t3 is destroyed; it wasn't detached or joined, so the program terminates
          t1, t2, t3 are detached, but too late
      
      This introduces a simple wait mechanism to avoid having the thread objects
      destroyed, while still joinable, once the block is gone.
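
      A minimal sketch of the idea (not the actual test code, which may use a
      different synchronization primitive): count the detach() calls and only let
      the block end once all four have happened, since destroying a still-joinable
      std::thread calls std::terminate():

        #include <atomic>
        #include <chrono>
        #include <thread>

        static std::atomic<int> detached(0);

        int main()
        {
            {
                std::thread t4([] { /* do some work */ });
                std::thread t3([&] { t4.detach(); detached++; });
                std::thread t2([&] { t3.detach(); detached++; });
                std::thread t1([&] { t2.detach(); detached++; });
                t1.detach();
                detached++;
                // Without this wait, the block could end while some of the
                // std::thread objects are still joinable, and destroying a
                // joinable std::thread terminates the program.
                while (detached.load() < 4) {
                    std::this_thread::sleep_for(std::chrono::milliseconds(1));
                }
            }
            return 0;
        }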
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  Nov 27, 2013
      Test: Small bugfix for tst-loadbalance · 3f494068
      Nadav Har'El authored
      
      Add missing join() in tst-loadbalance, to avoid rare crashes during the
      test.
      
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
      Test for scheduler's single-CPU fairness. · a8c2fea7
      Nadav Har'El authored
      
      This patch adds tst-scheduler.cc, containing a few tests for the fairness
      of scheduling of several threads on one CPU (for scheduling issues involving
      load-balancing across multiple CPUs, check out the existing tst-loadbalance).
      
      The test is written in standard C++11, so it can be compiled and
      run on both Linux and OSv, to compare their scheduler behaviors.
      It is actually more a benchmark than a test (it doesn't "succeed" or "fail").
      
      The test begins with several tests of the long-term fairness of the
      scheduler when threads of different or identical priorities are run for
      10 seconds, and we look at how much work each thread got done in those
      10 seconds. This test only works on OSv (which supports float priorities).
      
      The second part of the test again tests long-term fairness of the scheduler
      when all threads have the default priority (so this test is standard C++11):
      We run a loop which takes (when run alone) 10 seconds, on 2 or 3
      threads in parallel. We expect to see that all 2 or 3 threads
      finish at (more-or-less) exactly the same time - after 20 or 30
      seconds. Both OSv and Linux pass this test with flying colors.
      
      The third part of the test runs two different threads concurrently:
       1. One thread wants to use all available CPU to loop for 10 seconds.
       2. The second thread wants to loop in an amount that takes N
          milliseconds, and then sleep for N milliseconds, and so on,
          until completing the same number of loop iterations that (when run
          alone) takes 10 seconds.
      
      The "fair" behavior of the this test is that both threads get equal
      CPU time and finish together: Thread 2 runs for N milliseconds, then
      while it is sleeping for N more, Thread 1 gets to run.
      This measure this for N=1 through 32ms. In OSv's new scheduler, indeed both
      threads get an almost fair share (with N=32ms, one thread finishes in 19
      seconds, the second in 21.4 seconds; we don't expect total fairness because
      of the runtime decay).
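
      For illustration, the structure of this third part could look roughly like
      the following (a sketch in standard C++11 with invented names; iters_per_ms
      stands for a calibrated iterations-per-millisecond count, and this is not
      the actual tst-scheduler.cc code):

        #include <chrono>
        #include <thread>

        // Busy-loop for a fixed number of iterations; the volatile counter keeps
        // the compiler from optimizing the loop away.
        static void spin(long iterations)
        {
            volatile long sink = 0;
            for (long i = 0; i < iterations; i++) {
                sink++;
            }
        }

        // n_ms is the N above; iters_per_ms is how many loop iterations the
        // machine completes per millisecond when running alone (calibrated).
        static void run_pair(int n_ms, long iters_per_ms)
        {
            long total = 10000L * iters_per_ms;        // ~10 seconds of work
            std::thread hog([=] { spin(total); });     // thread 1: wants all the CPU
            std::thread intermittent([=] {             // thread 2: work N ms, sleep N ms
                for (long done = 0; done < total; done += n_ms * iters_per_ms) {
                    spin(n_ms * iters_per_ms);
                    std::this_thread::sleep_for(std::chrono::milliseconds(n_ms));
                }
            });
            hog.join();
            intermittent.join();
        }

        int main()
        {
            for (int n = 1; n <= 32; n *= 2) {
                run_pair(n, 100000);   // 100000 iterations/ms is only a placeholder
            }
            return 0;
        }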
      
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
      tst-tcp: set SO_REUSEADDR · 63a99d22
      Avi Kivity authored
      
      Allows longer tests to be run.
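
      The general pattern (a generic illustration, not the actual tst-tcp code) is
      to set the option before bind(), so that re-running the test does not fail
      with EADDRINUSE while sockets from the previous run are still in TIME_WAIT:

        #include <cstring>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>

        // Create a listening TCP socket on the given port, with SO_REUSEADDR set.
        static int make_listener(unsigned short port)
        {
            int s = socket(AF_INET, SOCK_STREAM, 0);
            if (s < 0) {
                return -1;
            }
            int one = 1;
            if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)) < 0) {
                close(s);
                return -1;
            }
            sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(port);
            if (bind(s, reinterpret_cast<sockaddr *>(&addr), sizeof(addr)) < 0 ||
                listen(s, 16) < 0) {
                close(s);
                return -1;
            }
            return s;
        }

        int main()
        {
            int s = make_listener(7777);   // the port number here is arbitrary
            if (s >= 0) {
                close(s);
            }
            return s >= 0 ? 0 : 1;
        }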
      
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  Nov 25, 2013
      tests: mincore() tests for demand paging · 20aad632
      Pekka Enberg authored
      
      As suggested by Nadav, add tests for mincore() interaction with demand
      paging.
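
      Roughly the property being checked (an illustrative sketch, not the actual
      test code): with demand paging, mincore() should report a page of an
      anonymous mapping as resident only after it has been touched:

        #include <cassert>
        #include <sys/mman.h>
        #include <unistd.h>

        int main()
        {
            long page = sysconf(_SC_PAGESIZE);
            size_t len = 4 * page;
            char *p = static_cast<char *>(mmap(nullptr, len, PROT_READ | PROT_WRITE,
                                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
            assert(p != MAP_FAILED);

            unsigned char vec[4];
            assert(mincore(p, len, vec) == 0);
            for (int i = 0; i < 4; i++) {
                assert((vec[i] & 1) == 0);   // nothing touched yet: not resident
            }

            p[0] = 1;                        // fault in the first page only
            assert(mincore(p, len, vec) == 0);
            assert((vec[0] & 1) == 1);       // first page now resident
            assert((vec[1] & 1) == 0);       // the rest still are not

            munmap(p, len);
            return 0;
        }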
      
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
      tests: Anonymous demand paging microbenchmark · d4bcf559
      Pekka Enberg authored
      
      This adds a simple mmap microbenchmark that can be run on both OSv and
      Linux.  The benchmark mmaps memory for various sizes and touches the
      mmap'd memory in 4K increments to fault in memory.  The benchmark also
      repeats the same tests using MAP_POPULATE for reference.
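
      In outline, each measurement looks something like this (a simplified sketch,
      not the actual test code; MAP_POPULATE is the Linux mmap flag that pre-faults
      the mapping):

        #include <chrono>
        #include <cstdio>
        #include <sys/mman.h>

        // Map 'bytes' of anonymous memory, touch it in 4K steps, and return the
        // elapsed time in seconds.  extra_flags is 0 or MAP_POPULATE.
        static double touch_pages(size_t bytes, int extra_flags)
        {
            auto start = std::chrono::steady_clock::now();
            char *p = static_cast<char *>(mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                                               MAP_PRIVATE | MAP_ANONYMOUS | extra_flags,
                                               -1, 0));
            if (p == MAP_FAILED) {
                return -1.0;
            }
            for (size_t off = 0; off < bytes; off += 4096) {
                p[off] = 1;                    // fault in (or just touch) each page
            }
            munmap(p, bytes);
            std::chrono::duration<double> d = std::chrono::steady_clock::now() - start;
            return d.count();
        }

        int main()
        {
            printf("   MiB demand populate\n");
            for (size_t mib = 1; mib <= 1024; mib *= 2) {
                size_t bytes = mib << 20;
                printf("%6zu %.3f  %.3f\n", mib,
                       touch_pages(bytes, 0),               // demand paging
                       touch_pages(bytes, MAP_POPULATE));   // pre-faulted, for reference
            }
            return 0;
        }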
      
      OSv page faults are slightly slower than Linux on the first iteration but
      faster on subsequent iterations, after the host operating system has faulted
      in memory for the guest.
      
      I've included full numbers on a 2-core Sandy Bridge i7 for an OSv guest, a
      Linux guest, and the Linux host below:
      
        OSv guest
        ---------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.004  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.007  0.000
          64 0.013  0.000
         128 0.024  0.000
         256 0.052  0.001
         512 0.229  0.002
        1024 0.587  0.005
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.019  0.001
         256 0.036  0.001
         512 0.069  0.002
        1024 0.137  0.005
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.000
         256 0.039  0.001
         512 0.087  0.002
        1024 0.138  0.005
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.012  0.000
         128 0.025  0.001
         256 0.040  0.001
         512 0.082  0.002
        1024 0.138  0.005
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.012  0.000
         128 0.028  0.001
         256 0.040  0.001
         512 0.082  0.002
        1024 0.166  0.005
      
        Linux guest
        -----------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.001  0.000
           4 0.002  0.000
           8 0.003  0.000
          16 0.005  0.000
          32 0.008  0.000
          64 0.015  0.000
         128 0.151  0.001
         256 0.090  0.001
         512 0.266  0.003
        1024 0.401  0.006
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.009  0.000
         128 0.019  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.144  0.006
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.019  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.143  0.006
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.001
         256 0.038  0.001
         512 0.073  0.003
        1024 0.143  0.006
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.144  0.006
      
        Linux host
        ----------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.009  0.000
         128 0.019  0.001
         256 0.035  0.001
         512 0.152  0.003
        1024 0.286  0.011
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.192  0.003
        1024 0.334  0.011
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.194  0.003
        1024 0.329  0.011
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.036  0.001
         512 0.138  0.003
        1024 0.341  0.011
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.135  0.002
        1024 0.324  0.011
      
      Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  Oct 30, 2013
      tests: Remove tst-zfs-simple.so · f966188e
      Pekka Enberg authored
      
      The tst-zfs-simple.so test case has served its purpose for bringup.  As the
      OSv build now relies on a working ZFS, there's no need to run this test.
      
      Furthermore, we have the full ztest stress test in the tree:
      
        bsd/cddl/contrib/opensolaris/cmd/ztest/ztest.c
      
      which we can use if needed.
      
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>