  1. Sep 14, 2013
    • only run dhcp if we have a network interface · 1f18e2e2
      Glauber Costa authored
      As I have stated previously, what is true for qemu (that we always have
      a user-provided network interface) is not true for Xen. It is quite possible
      that we boot with no network interface at all. In that case, we would get
      stuck waiting for an IP address that will never come.

      This patch calls DHCP only if our interface is really up. Since networking
      is such a core service, we print a message if we cannot do that.
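
      A minimal sketch of the check, assuming hypothetical names (the interface
      lookup and dhcp_start() below are illustrative, not the actual OSv API):

        // Sketch only: start DHCP just when a usable interface exists.
        void maybe_start_dhcp()
        {
            struct ifnet* ifp = lookup_first_interface();   // hypothetical
            if (!ifp || !(ifp->if_flags & IFF_UP)) {
                printf("WARNING: no network interface found, not starting DHCP\n");
                return;
            }
            dhcp_start(ifp);                                // hypothetical
        }
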
    • initialize console later · bc209ae9
      Glauber Costa authored
      Some time ago I moved the console initialization a bit earlier, so
      messages could be seen earlier. This has, however, been causing spurious
      problems (about one in every 10-15 boots) on Xen HVM. The reason is that the
      ISA serial reset code enables interrupts upon reset, and the ISA irq interrupt
      will call wake() on the poll thread, which at this point is not yet started.

      Since these days we already have simple_write() dealing with the early output,
      move it back to where it used to be.

      P.S: Dima found a way to make this problem 100% reproducible, by queueing
      data in the input line before the console starts. With this patch, the problem
      is gone even if Dima's method is used.
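
      A sketch of the boot ordering this restores; simple_write() is real per the
      message, but the surrounding init functions are illustrative:

        // Illustrative boot ordering, not the actual OSv init code.
        void boot()
        {
            early_console_setup();   // hypothetical: only enables simple_write()
            // ... early boot messages go through simple_write() ...
            start_threads();         // scheduler and worker threads now exist
            console_init();          // safe: the ISA irq handler may now wake()
                                     // the input poll thread, which is running
        }
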
    • Change "hz" to fix poll() premature timeout · 26a30376
      Nadav Har'El authored
      msleep() measures time in units of 1/hz seconds. We had hz = 1,000,000,
      which gives excellent resolution (microseconds) but a terrible range
      (it limits msleep()'s timeout to about 35 minutes).

      We had a program (Cassandra) doing poll() with a timeout of 2 hours,
      which caused msleep to think we gave a negative timeout.

      This patch reduces hz to 1,000, i.e., makes msleep() operate in the same units
      as poll(). Looking at the code, I don't believe this change will have any
      ill effects - we don't need higher resolution (the freebsd code is used to
      hz=1,000, which is the default there), and the code converts time units to
      hz ticks correctly, always using the hz macro. The allowed range for timeouts
      grows to over 24 days - and matches poll()'s allowed range.
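
      The arithmetic behind both numbers, assuming timeouts are held in a signed
      32-bit int (the usual case for the BSD hz/ticks machinery):

        // With hz = 1,000,000: INT_MAX / 1,000,000 ~= 2147 s  ~= 35.8 minutes.
        // With hz = 1,000:     INT_MAX / 1,000     ~= 2.1e6 s ~= 24.8 days.
        // A 2-hour poll() is 7200 * 1,000,000 = 7.2e9 ticks, which overflows
        // a 32-bit int and shows up as a negative timeout.
        #include <climits>
        #include <cstdio>

        int main()
        {
            long hzvals[] = {1000000, 1000};
            for (long hz : hzvals) {
                std::printf("hz=%-8ld max timeout = %ld seconds\n", hz, INT_MAX / hz);
            }
        }
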
    • libc: add times() stub · ddec52d1
      Pekka Enberg authored
      Needed by JMX.
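
      The commit only adds a stub; a guess at the shape such a stub can take
      (illustrative, not the actual OSv code):

        #include <sys/times.h>

        // Stub: report zero CPU times; callers that only need the call to
        // exist (like JMX here) are satisfied.
        clock_t times(struct tms *buf)
        {
            if (buf) {
                buf->tms_utime = buf->tms_stime = 0;
                buf->tms_cutime = buf->tms_cstime = 0;
            }
            return 0;
        }
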
    • usr.manifest: add librmi.so · 8b111796
      Pekka Enberg authored
      Fixes the following problem when connecting to the JVM via JMX/RMI:
      
      ERROR 08:22:55,278 Exception in thread Thread[RMI TCP Connection(idle),5,RMI Runtime]
      java.lang.UnsatisfiedLinkError: no rmi in java.library.path
      	at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1878)
      	at java.lang.Runtime.loadLibrary0(Runtime.java:849)
      	at java.lang.System.loadLibrary(System.java:1087)
      	at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:67)
      	at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:47)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at sun.rmi.server.MarshalInputStream.<clinit>(MarshalInputStream.java:122)
      	at sun.rmi.transport.StreamRemoteCall.getInputStream(StreamRemoteCall.java:133)
      	at sun.rmi.transport.Transport.serviceCall(Transport.java:142)
      	at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
      	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
      	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      	at java.lang.Thread.run(Thread.java:724)
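
      The fix itself is a one-line manifest addition; the entry below is a guess
      at its form (OSv's usr.manifest maps image paths to host build paths, and
      the exact jdk path may differ):

        # Hypothetical usr.manifest entry for librmi.so.
        /usr/lib/jvm/jre/lib/amd64/librmi.so: %(jdkbase)s/jre/lib/amd64/librmi.so
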
    • usr.manifest: add management.properties · a301051a
      Pekka Enberg authored
      Fixes the following problem when JMX is enabled in the JVM:
      
      Error: Config file not found: /usr/lib/jvm/jre/lib/management/management.properties
      program exited with status 1
      Aborted
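
      Again presumably a single manifest line, shown here as a guess at its form:

        # Hypothetical usr.manifest entry mapping the missing config file.
        /usr/lib/jvm/jre/lib/management/management.properties: %(jdkbase)s/jre/lib/management/management.properties
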
    • Added JDWP related libraries (remote debugger) · 02a52874
      Or Cohen authored
      I'm not sure about the target location of libnpt.so, but when it was under the
      JVM libraries, the debug agent didn't find it.
      
      You should be able to start the debug agent as recommended with these JVM
      options:
      -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005
      
      Remote debugging through your favorite IDE should be enabled.
    • Add makefile target for QCOW2 image creation · 656952de
      Pekka Enberg authored
      Add a "qcow2" target to Makefile that uses "qemu-img convert" to build a
      smaller qcow2 image out of the OSv raw image.
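
      A sketch of such a target; the image file names are assumptions, not
      necessarily what the OSv Makefile uses:

        # Hypothetical Makefile target; usr.img / usr.qcow2 names are assumed.
        qcow2: usr.img
        	qemu-img convert -f raw -O qcow2 usr.img usr.qcow2
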
  2. Sep 10, 2013
    • gdb: Fix osv mmap memory layout · 9ee0d615
      Pekka Enberg authored
      Fix up the memory layout of 'class vma' for the 'osv mmap' gdb command.
    • mmu: Fix file-backed vma splitting · d72b550c
      Pekka Enberg authored
      Commit 3510a5ea ("mmu: File-backed VMAs") forgot to fix vma::split() to
      take file-backed mappings into account. Fix the problem by making
      vma::split() a virtual function and implementing it separately for
      file_vma.
      
      Spotted by Avi Kivity.
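
      A simplified sketch of the shape of the fix; member names here are
      illustrative, not the actual OSv mmu classes:

        #include <cstdint>
        #include <sys/types.h>

        class vma {
        public:
            virtual ~vma() {}
            // Split this vma at address 'edge' (simplified: just trim; the
            // real code also creates a vma for the upper part).
            virtual void split(uintptr_t edge) { _end = edge; }
        protected:
            uintptr_t _start = 0, _end = 0;
        };

        class file_vma : public vma {
        public:
            void split(uintptr_t edge) override {
                // A file-backed mapping must also account for the file offset
                // of the upper half, which the base version knows nothing about.
                vma::split(edge);
            }
        private:
            off_t _offset = 0;   // hypothetical member
        };
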
    • Added basic readline configuration · c5c4534c
      Or Cohen authored
      Parsed by JLine (in CRaSH). The console should now better understand keys
      like Home/End/arrows.
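
      Presumably an inputrc-style file; a tiny illustrative sample of the kind of
      bindings involved (not the actual contents of the commit):

        # Illustrative readline bindings for Home/End keys.
        "\e[1~": beginning-of-line
        "\e[4~": end-of-line
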
    • Merge branch 'stty-for-jni' · 8b0ea169
      Or Cohen authored
    • 6b548713 (Or Cohen)
    • DHCP: Fix crash · 68f4d147
      Nadav Har'El authored
      Rarely (about once every 20 runs) OSv crashed during boot, in the
      DHCP code. It turns out that the code first sends out the DHCP requests,
      and then creates a thread to handle the replies. When a reply arrives,
      the code wake()s the thread, but on rare occasions the thread hasn't yet
      been set up (it is still a null pointer), so we crash.

      Fix this by reversing the order - first create the reply-handling thread,
      and only then send the request.
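
      The gist of the reordering, as a sketch (types and helper names are
      illustrative):

        // Sketch: create the consumer before producing work for it, so a
        // fast reply can never wake() a still-null thread pointer.
        struct dhcp_agent {
            thread* _thread = nullptr;           // reply-handling thread

            void start() {
                _thread = make_reply_thread();   // hypothetical helper
                _thread->start();
                send_requests();                 // a reply now finds a live thread
            }
        };
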
  3. Sep 08, 2013
    • Scheduler: Fix load-balancer bug · e9f0cf29
      Nadav Har'El authored
      The load_balance() code checks if another CPU has fewer threads in its
      run queue than this CPU, and if so, migrates one of this CPU's threads
      to the other CPU.

      However, when we count this core's runnable threads, we overcount by
      1, because as soon as load_balance() goes back to sleep, one of the
      runnable threads will start running. So if this core has just one more
      runnable thread than some remote core, they are actually even, and in
      that case we should *not* migrate a thread.
      
      Overcounting the number of threads on the core running load_balance
      caused bad performance in 2-core and 2-thread SpecJVM: Normally, the
      size of the run queue on each core is 1 (each core is running one of
      the two threads, and on the run queue we have the idle thread). But
      when load_balance runs it sees 2 runnable threads (the idle thread and
      the preempted benchmark thread), and the second core has just 1, so
      it decides to migrate one of its threads to the second CPU. When this
      is over, the second CPU has both benchmark threads, and the first CPU
      has nothing, and this is only fixed some time later when the
      second CPU's load_balance thread runs; later the balance is
      ruined again. All the time that the two threads run on the same CPU
      significantly hurts performance, and on the host's "top" we see qemu
      taking just 120%-150% instead of 200% as it should (and as it does
      after this patch).
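
      The gist of the fix, as a sketch (the names are illustrative, not the
      actual scheduler code):

        // The thread running load_balance() frees its CPU as soon as it goes
        // back to sleep, so the local core effectively has local_count - 1
        // runnable threads. Migrate only if the remote core is strictly
        // below that.
        void load_balance_check(unsigned local_count, unsigned remote_count)
        {
            if (remote_count + 1 < local_count) {
                migrate_one_thread();   // hypothetical
            }
        }
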
    • Scheduler: Avoid vruntime jump when clock jumps · 253e4536
      Nadav Har'El authored
      Currently, clock::get()->time() jumps (by system_time(), i.e., the host's
      uptime) at some point during initialization. This can be a huge jump
      (e.g., a week if the host's uptime is a week). Fixing this jump is hard,
      so we'd rather just tolerate it.

      reschedule_from_interrupt() handles this clock jump badly. It calculates
      current_run, the amount of time the current thread has run, to include this
      jump while the thread was running. In the above example, a run time of
      a whole week is wrongly attributed to some thread and added to its vruntime,
      causing it not to be scheduled again until all other threads yield the
      CPU.
      
      The fix in this patch is to limit the vruntime increase after a long
      run to max_slice (10ms). Even if a thread runs for longer (or just thinks
      it ran for longer), it won't be "penalized" in its dynamic priority more
      than a thread that ran for 10ms. Note that this cap makes sense, as
      cpu::enqueue already enforces a similar limit on the vruntime "bonus"
      of a woken thread, and this patch works toward a similar goal (avoid
      giving one thread a huge bonus because another thread was given a huge
      penalty).
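
      A sketch of the cap; current_run and max_slice are named in the message,
      the surrounding accounting is illustrative:

        // In reschedule_from_interrupt(), sketch form: never charge a thread
        // more than max_slice of vruntime for a single stint on the CPU.
        auto current_run = now - running_since;
        if (current_run > max_slice) {
            current_run = max_slice;   // a clock jump can't inflate vruntime
        }
        thread->vruntime += current_run;   // illustrative accounting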
      
      This bug is very visible in the CPU-bound SPECjvm2008 benchmarks, when
      running two benchmark threads on two virtual cpus. As it happens, the
      load_balancer() is the one that gets the huge vruntime increase, so
      it doesn't get to run until no other thread wants to run. Because we start
      with both CPU-bound threads on the same CPU, and these hardly yield the
      CPU (and even more rarely are the two threads sleeping at the same time),
      the load balancer thread on this CPU doesn't get to run, and the two threads
      remain on the same CPU, giving us halved performance (2-cpu performance
      identical to 1-cpu performance) and on the host we see qemu using 100% cpu,
      instead of 200% as expected with two vcpus.
    • a8d3a5ca (Guy Zana)