  1. Nov 29, 2013
  2. Nov 27, 2013
    • Test for scheduler's single-CPU fairness. · a8c2fea7
      Nadav Har'El authored
      
      This patch adds tst-scheduler.cc, containing a few tests for the fairness
      of scheduling of several threads on one CPU (for scheduling issues involving
      load-balancing across multiple CPUs, check out the existing tst-loadbalance).
      
      The test is written in standard C++11, so it can be compiled and
      run on both Linux and OSv, to compare their scheduler behaviors.
      It is actually more a benchmark than a test (it doesn't "succeed" or "fail").
      
      The test begins with several tests of the long-term fairness of the
      scheduler when threads of different or identical priorities are run for
      10 seconds, and we look at how much work each thread got done in those
      10 seconds. This test only works on OSv (which supports float priorities).
      
      The second part of the test again tests long-term fairness of the scheduler
      when all threads have the default priority (so this test is standard C++11):
      We run a loop which takes (when run alone) 10 seconds, on 2 or 3
      threads in parallel. We expect to see that all 2 or 3 threads
      finish at (more-or-less) exactly the same time - after 20 or 30
      seconds. Both OSv and Linux pass this test with flying colors.
      
      The third part of the test runs two different threads concurrently:
       1. One thread wants to use all available CPU to loop for 10 seconds.
       2. The second thread wants to loop in an amount that takes N
          milliseconds, and then sleep for N milliseconds, and so on,
          until completing the same number of loop iterations that (when run
          alone) takes 10 seconds.
      
      The "fair" behavior of the this test is that both threads get equal
      CPU time and finish together: Thread 2 runs for N milliseconds, then
      while it is sleeping for N more, Thread 1 gets to run.
      This measure this for N=1 through 32ms. In OSv's new scheduler, indeed both
      threads get an almost fair share (with N=32ms, one thread finishes in 19
      seconds, the second in 21.4 seconds; we don't expect total fairness because
      of the runtime decay).
      
      Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  3. Nov 26, 2013
  4. Nov 25, 2013
    • Start up shell and management web in parallel · c29222c6
      Amnon Heiman authored
      
      Start up shell and management web in parallel to make boot faster.  Note
      that we also switch to latest mgmt.git which decouples JRuby and CRaSH
      startup.
      
      Signed-off-by: Amnon Heiman <amnon@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • tests: Anonymous demand paging microbenchmark · d4bcf559
      Pekka Enberg authored
      
      This adds a simple mmap microbenchmark that can be run on both OSv and
      Linux.  The benchmark mmaps memory for various sizes and touches the
      mmap'd memory in 4K increments to fault in memory.  The benchmark also
      repeats the same tests using MAP_POPULATE for reference.
      
      OSv page faults are slightly slower than Linux's on the first iteration
      but faster on subsequent iterations, after the host operating system has
      faulted in memory for the guest.
      
      I've included full numbers on a 2-core Sandy Bridge i7 for an OSv guest,
      a Linux guest, and the Linux host below:
      
        OSv guest
        ---------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.004  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.007  0.000
          64 0.013  0.000
         128 0.024  0.000
         256 0.052  0.001
         512 0.229  0.002
        1024 0.587  0.005
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.019  0.001
         256 0.036  0.001
         512 0.069  0.002
        1024 0.137  0.005
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.000
         256 0.039  0.001
         512 0.087  0.002
        1024 0.138  0.005
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.012  0.000
         128 0.025  0.001
         256 0.040  0.001
         512 0.082  0.002
        1024 0.138  0.005
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.000  0.000
           4 0.000  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.012  0.000
         128 0.028  0.001
         256 0.040  0.001
         512 0.082  0.002
        1024 0.166  0.005
      
        Linux guest
        -----------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.001  0.000
           2 0.001  0.000
           4 0.002  0.000
           8 0.003  0.000
          16 0.005  0.000
          32 0.008  0.000
          64 0.015  0.000
         128 0.151  0.001
         256 0.090  0.001
         512 0.266  0.003
        1024 0.401  0.006
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.009  0.000
         128 0.019  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.144  0.006
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.019  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.143  0.006
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.001
         256 0.038  0.001
         512 0.073  0.003
        1024 0.143  0.006
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.003  0.000
          32 0.005  0.000
          64 0.010  0.000
         128 0.020  0.001
         256 0.037  0.001
         512 0.072  0.003
        1024 0.144  0.006
      
        Linux host
        ----------
      
        Iteration 1
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.001  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.005  0.000
          64 0.009  0.000
         128 0.019  0.001
         256 0.035  0.001
         512 0.152  0.003
        1024 0.286  0.011
      
        Iteration 2
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.192  0.003
        1024 0.334  0.011
      
        Iteration 3
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.194  0.003
        1024 0.329  0.011
      
        Iteration 4
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.036  0.001
         512 0.138  0.003
        1024 0.341  0.011
      
        Iteration 5
      
             time (seconds)
         MiB demand populate
           1 0.000  0.000
           2 0.000  0.000
           4 0.001  0.000
           8 0.001  0.000
          16 0.002  0.000
          32 0.004  0.000
          64 0.010  0.000
         128 0.018  0.001
         256 0.035  0.001
         512 0.135  0.002
        1024 0.324  0.011
      
      Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  5. Nov 22, 2013
    • build: bring back libgcc_s.so · be565320
      Avi Kivity authored
      
      Commit c9e61d4a ("build: link libstdc++, libgcc_s only once") threw
      away libgcc_s.so since we already link with libgcc.a and libgcc_eh.a, which
      provide the same symbols, and since having the same symbols in multiple
      objects violates certain C++ rules.
      
      However, libgcc_eh.a provides certain symbols only as local symbols, which
      means they aren't available to the payload.  This manifests itself in errors
      such as failing to find _Unwind_Resume if an exception is thrown.
      
      (This is likely due to the requirement that multiple objects linked with
      libgcc_eh.a work together, which also brings some confidence that the ODR
      violations of having two versions of the library won't bite us.)
      
      Fix the problem by adding libgcc_s.so to the filesystem and allowing
      the payload to link to it.
      
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  6. Nov 21, 2013
  7. Nov 15, 2013
  8. Nov 13, 2013
    • OSv module support · 0dcf1f8f
      Takuya ASADA authored
      The idea of the patch is basically described in a previous post:
      
      https://groups.google.com/d/msg/osv-dev/RL2S3AL9TNE/l4XZJo3-lI0J
      
      With this patch, you will be able to install OSv apps into the disk
      image during the "make all" stage.
      
      These apps do not need to live in the OSv repository; you can install
      apps from any git or svn repository, or from a local directory.
      
      You'll need to write a config file to add apps; the format of the file
      is JSON.
      
      Here's a sample of the file:
      {
         "modules":[
            {
               "name":"osv-mruby",
               "type":"git",
               "path":"https://github.com/syuu1228/osv-mruby.git",
               "branch":"master"
            }
         ]
      }
      
      If you add "module" on config file, make all calls script/module.py.
      
      This script performs "git clone" to fetch each repository into
      $(out)/module, and invokes "make module" on each module.
      
      "make module" should outputs bootfs.manifest/usr.manifest on module
      directory, the script merge bootfs.manifest.skel/usr.manifest.skel and
      module local manifests to single file
      $(out)/bootfs.manifest/$(out)/usr.manifest.
      
      Here's app Makefile example:
      
        https://github.com/syuu1228/osv-mruby/blob/master/Makefile
      
      It has a "module" target, which builds all binaries and
      generates *.manifest.
      
      Signed-off-by: Takuya ASADA <syuu@dokukino.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
  9. Nov 04, 2013
  10. Oct 23, 2013
  11. Oct 11, 2013
    • Move mgmt submodule head · 9bb55838
      Tomasz Grabiec authored
      This also requires fixing paths to mgmt jars in build.mk
      and usr.manifest as the version scheme has changed.
      
      git log --format=short 7a4db4e759b..54f4810a7:
      
        commit 54f4810a76fabf955aeea34baefa336abf8b8467
        Author: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
      
            Revert "supporting artifactory publish"
      
        commit 4abf771d146d6cde66f330e6b6ab6ececffb4cdd
        Author: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
      
            mgmt/web: ditch jline-2.7 pulled by jruby-core
      
        commit 3863a3b58b661cd751314966cccb0f6c9835ed4a
        Author: Nadav Har'El <nyh@cloudius-systems.com>
      
            Moved RunJava to io.osv namespace
      
        commit be0717595f45d647062e7a41cc8dd38393c96547
        Author: Ronen Narkis <narkisr@gmail.com>
      
            supporting testing (jruby rake test does not work no matter what)
      
        commit 46e74f6bb886a0c62b06f08559fe2e44efdb8900
        Author: Ronen Narkis <narkisr@gmail.com>
      
            ignoring build
      
        commit 95ff3b70bae877d5d8cf0144853d1a201a0be333
        Author: Ronen Narkis <narkisr@gmail.com>
      
            verfying json existence and giving meaning full error
      
        commit 8b60c4a40aa4bcb7ce08bba600fd9cd6d63e1073
        Author: Ronen Narkis <narkisr@gmail.com>
      
            moving to three digit versioning in order to have a finer grained control on rel
      
        commit ffa7646388cec8d5b138ff4fc28a985c6344824c
        Author: Ronen Narkis <narkisr@gmail.com>
      
            supporting artifactory publish
      
        commit 8855112e2c867b4f855ed28ad9d9982c26bc56a3
        Author: Ronen Narkis <narkisr@gmail.com>
      
            clearing unused repo
      
        commit 3be79eb18b2be7bf1f28aaebd9905ac77945e4e4
        Author: Or Cohen <orc@fewbytes.com>
      
            Migrated ifconfig from previous JS console
      
        commit 287b014cf709e9c46692c59f87b70ebf056114b5
        Author: Or Cohen <orc@fewbytes.com>
      
            Migrated run command from previous CLI
      
        commit 0ffe064d30c1a812d7e853ee568ce45bfc16ed42
        Author: Or Cohen <orc@fewbytes.com>
      
            Added daemonizeIfNeeded helper method for commands
      
        commit 36951a86493c954a2e939648b5060260fac5b539
        Author: Or Cohen <orc@fewbytes.com>
      
            Moved ELFLoader from cloudius.cli to cloudius.util
  12. Oct 01, 2013
  13. Sep 14, 2013
    • usr.manifest: add librmi.so · 8b111796
      Pekka Enberg authored
      Fixes the following problem when connecting to the JVM via JMX/RMI:
      
      ERROR 08:22:55,278 Exception in thread Thread[RMI TCP Connection(idle),5,RMI Runtime]
      java.lang.UnsatisfiedLinkError: no rmi in java.library.path
      	at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1878)
      	at java.lang.Runtime.loadLibrary0(Runtime.java:849)
      	at java.lang.System.loadLibrary(System.java:1087)
      	at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:67)
      	at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:47)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at sun.rmi.server.MarshalInputStream.<clinit>(MarshalInputStream.java:122)
      	at sun.rmi.transport.StreamRemoteCall.getInputStream(StreamRemoteCall.java:133)
      	at sun.rmi.transport.Transport.serviceCall(Transport.java:142)
      	at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
      	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
      	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      	at java.lang.Thread.run(Thread.java:724)
    • usr.manifest: add management.properties · a301051a
      Pekka Enberg authored
      Fixes the following problem when JMX is enabled in the JVM:
      
      Error: Config file not found: /usr/lib/jvm/jre/lib/management/management.properties
      program exited with status 1
      Aborted
    • Added JDWP related libraries (remote debugger) · 02a52874
      Or Cohen authored
      I'm not sure about the target location of libnpt.so, but when it was
      under the JVM libraries, the debug agent didn't find it.
      
      You should be able to start the debug agent as recommended with these JVM
      options:
      -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005
      
      Remote debugging through your favorite IDE should be enabled.
  14. Sep 13, 2013
  15. Sep 11, 2013
  16. Sep 03, 2013
  17. Sep 01, 2013
  18. Aug 12, 2013
    • build: link libstdc++, libgcc_s only once · c9e61d4a
      Avi Kivity authored
      Currently we statically link to libstdc++ and libgcc_s, and also dynamically
      link to the same libraries (since the payload requires them).  This causes
      some symbols to be available from both the static and dynamic version.
      
      With the resolution order change introduced by 82513d41, we can
      resolve the same symbol to different addresses at different times.  This
      violates the One Definition Rule, and in fact breaks std::string's
      destructor.
      
      Fix by only linking in the libraries statically.  We use ld's --whole-archive
      flag to bring in all symbols, including those that may be used by the payload
      but not by the kernel.
      
      Some symbols now become duplicates; we drop our version.
  19. Jul 02, 2013
  20. Apr 14, 2013
  21. Apr 09, 2013