  1. Jul 17, 2013
  2. Jul 15, 2013
  3. Jul 12, 2013
  4. Jul 11, 2013
    • Avi Kivity's avatar
      Merge branch 'v2p-debug-3' · b49f066c
      Avi Kivity authored
      Fix various issues with the debug allocator.
      b49f066c
    • Avi Kivity's avatar
      mmu: fix map_file() deadlock · 8919c66a
      Avi Kivity authored
      map_file() takes the vm lock, then calls read() to pre-fault the data.
      However read() may cause allocations, which then require the vm lock as well.
      
      Fix by faulting in the data after dropping the lock.
      8919c66a
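The fix above can be sketched as a lock-scope change: do only the mapping setup under the lock, then pre-fault with the lock dropped so that allocations triggered by the fault can retake it. All names here (`vm_lock`, `install_mapping`, `prefault`) are illustrative stand-ins, not OSv's actual identifiers.

```cpp
#include <mutex>
#include <vector>
#include <cstddef>

std::mutex vm_lock;                 // stands in for the vm lock
std::vector<char> backing(8192);    // stands in for the mapped file data

void install_mapping() { /* set up the mapping; must hold vm_lock */ }

// read()-style pre-fault: may allocate, and allocation may need vm_lock,
// so this must run with the lock already dropped.
void prefault() {
    for (std::size_t i = 0; i < backing.size(); i += 4096)
        backing[i] = backing[i];    // touch each page
}

void map_file() {
    {
        std::lock_guard<std::mutex> guard(vm_lock);  // narrow scope
        install_mapping();
    }               // vm_lock dropped here, before any faulting
    prefault();     // safe: allocations inside can take vm_lock
}
```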
    • Avi Kivity's avatar
      memory: let the debug allocator mimic the standard allocator more closely · 1ea5672f
      Avi Kivity authored
      The standard allocator returns page-aligned addresses for large allocations.
      Some OSv code incorrectly relies on this.
      
      While we should fix the incorrect code, for now, adjust the debug allocator
      to return aligned addresses.
      
      The debug allocator now uses the following layout:
      
        [header page][guard page][user data][pattern tail][guard page]
      1ea5672f
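The layout above can be expressed as simple offset arithmetic. This is an illustrative model only: the 4 KiB page size and the field names are assumptions, not OSv's actual constants.

```cpp
#include <cstddef>

constexpr std::size_t page = 4096;  // assumed page size

// Byte offsets of each region in the layout:
//   [header page][guard page][user data][pattern tail][guard page]
struct debug_layout {
    std::size_t header;  // header page
    std::size_t guard1;  // leading guard page
    std::size_t user;    // user data, page-aligned as the fix requires
    std::size_t tail;    // pattern tail, right after the user data
    std::size_t guard2;  // trailing guard page
};

constexpr std::size_t round_up(std::size_t n) {
    return (n + page - 1) & ~(page - 1);
}

constexpr debug_layout layout_for(std::size_t size) {
    std::size_t user = 2 * page;                // after header + guard
    return {0, page, user, user + size, user + round_up(size)};
}
```

Because the user data starts at a whole-page offset, large allocations come back page-aligned, matching the standard allocator's behavior that the offending code relies on.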
    • Avi Kivity's avatar
      virtio: explicitly request contiguous memory for the virtio ring · 79aa5d28
      Avi Kivity authored
      Required by the virtio spec.
      79aa5d28
    • Avi Kivity's avatar
      memory: add alloc_phys_contiguous_aligned() API · b15db045
      Avi Kivity authored
      Virtio and other hardware need physically contiguous memory beyond one
      page. They also require page-aligned memory.
      Add an explicit API for contiguous and aligned memory allocation.
      
      While our default allocator returns physically contiguous memory, the debug
      allocator does not, causing virtio devices to fail.
      b15db045
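A sketch of the alignment contract such an API promises. The real OSv function ties into the physical allocator; here, purely for illustration, standard `aligned_alloc()` models the "aligned, contiguous in our address space" guarantee, and the function name mirrors the commit title rather than quoting actual OSv code.

```cpp
#include <cstdlib>
#include <cstddef>
#include <cstdint>

// Illustrative model: return memory aligned to `align` bytes.
// (aligned_alloc requires size to be a multiple of the alignment,
// so round the request up first.)
void* alloc_phys_contiguous_aligned(std::size_t size, std::size_t align) {
    std::size_t rounded = (size + align - 1) / align * align;
    return std::aligned_alloc(align, rounded);
}
```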
    • Dor Laor's avatar
      Move from a request array approach back to allocation. · 5bcb95d9
      Dor Laor authored
      virtio_blk pre-allocates requests into a cache to avoid re-allocation
      (possibly an unneeded optimization with the current allocator).  However,
      it doesn't take into account that requests can be completed out-of-order,
      and simply reuses requests in cyclic order. Noted by Avi; I had a
      version that peeked into the index ring instead, but that was too
      complex a solution. There is no performance degradation with SMP,
      thanks to the good allocator we have today.
      5bcb95d9
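A toy model of the bug being reverted: if requests are handed out in cyclic order but the device completes them out of order, a still-pending request gets reused. The `request`/`issue`/`complete` names are illustrative, not virtio_blk's actual code.

```cpp
#include <array>
#include <cstddef>

struct request { bool in_flight = false; };

std::array<request, 2> ring;   // tiny pre-allocated request cache
std::size_t next = 0;

// Cyclic reuse, as in the buggy scheme. Returns true if we just
// clobbered a request the device has not completed yet.
bool issue() {
    request* r = &ring[next];
    next = (next + 1) % ring.size();
    bool clobbered = r->in_flight;   // nothing guards against this
    r->in_flight = true;
    return clobbered;
}

void complete(std::size_t i) { ring[i].in_flight = false; }
```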
    • Nadav Har'El's avatar
      Fix socket poll() deadlock · 1e65eb54
      Nadav Har'El authored
      In commit 7ecbf29f I added to the
      poll_install() stage of poll() a check of the current state of the file -
      to avoid the sleep if the file became ready before we managed to "install"
      its poll request.
      
      However, I wrongly believed it was necessary to put this check inside
      the FD_LOCK together with the request installation. In fact, it doesn't
      need to be in the same lock - all we need is for the check to happen
      *after* the installation. The call to fo_poll() doesn't need to be in
      the same FD_LOCK or even in an FD_LOCK at all.
      
      Moreover, as it turns out, it must NOT be in an FD_LOCK() because this
      results in a deadlock when polling sockets, caused by two different
      code paths taking locks in opposite order:
      
      1. Before this fix, poll() took FD_LOCK and called fo_poll() which
         called sopoll_generic() which took a SOCKBUF_LOCK
      
      2. In the wake path, SOCKBUF_LOCK was taken, then so_wake_poll()
         is called which calls poll_wake() which takes FD_LOCK.
      1e65eb54
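The fixed discipline can be sketched with two mutexes standing in for FD_LOCK and SOCKBUF_LOCK. The point is that the poll path never holds both at once, so it cannot invert the wake path's SOCKBUF_LOCK-then-FD_LOCK order. Function bodies and names are illustrative, not the actual OSv/BSD code.

```cpp
#include <mutex>

std::mutex fd_lock;        // stands in for FD_LOCK
std::mutex sockbuf_lock;   // stands in for SOCKBUF_LOCK
bool woke = false;

// Path 1, after the fix: install the poll request under fd_lock,
// then drop it before the fo_poll()/sopoll_generic() state check.
void poll_install_and_check() {
    {
        std::lock_guard<std::mutex> fd(fd_lock);
        /* install the poll request */
    }   // fd_lock dropped here
    std::lock_guard<std::mutex> sb(sockbuf_lock);  // no fd_lock held
    /* re-check readiness, as sopoll_generic() would */
}

// Path 2 (wake): SOCKBUF_LOCK first, then FD_LOCK, as before.
void wake_path() {
    std::lock_guard<std::mutex> sb(sockbuf_lock);
    std::lock_guard<std::mutex> fd(fd_lock);
    woke = true;    /* poll_wake() */
}
```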
    • Nadav Har'El's avatar
      Fix hang in virtio_driver::wait_for_queue · 8ebb1693
      Nadav Har'El authored
      virtio_driver::wait_for_queue() would often hang in a memcached and
      mc_benchmark workload, waiting forever for received packets although
      these *do* arrive.
      
      As part of the virtio protocol, we need to set the host notification
      flag (we call this, somewhat confusingly, queue->enable_interrupts())
      and then check if there's anything in the queue, and if not, wait
      for the interrupt.
      
      This order is important: If we check the queue and only then set the
      notification flag, and data came in between those, the check will be
      empty and an interrupt never sent - and we can wait indefinitely for
      data that has already arrived.
      
      We did this in the right order, but the host code, running on a
      different CPU, might see memory accesses in a different order!
      We need a memory fence to ensure that the same order is also seen
      on other processors.
      
      This patch adds a memory fence to the end of the enable_interrupts()
      function itself, so we can continue to use it as before in
      wait_for_queue(). Note that we do *not* add a memory fence to
      disable_interrupts() - because no current use (and no expected use)
      cares about the ordering of disable_interrupts() vs other memory
      accesses.
      8ebb1693
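The ordering requirement can be sketched with a `std::atomic_thread_fence` between setting the notification flag and checking the queue. The field names (`avail_flags`, `used_idx`, `last_seen`) are illustrative stand-ins for the virtio ring state, not the driver's actual members.

```cpp
#include <atomic>

std::atomic<unsigned short> avail_flags{1};  // 1 = "don't notify me"
unsigned short used_idx = 0;                 // written by the host
unsigned short last_seen = 0;                // last index we consumed

// As in the patch: the fence lives inside enable_interrupts() itself,
// so callers keep using it as before.
void enable_interrupts() {
    avail_flags.store(0, std::memory_order_relaxed);      // ask host to notify
    std::atomic_thread_fence(std::memory_order_seq_cst);  // the added fence
}

bool queue_has_data() { return used_idx != last_seen; }

// wait_for_queue()'s decision: the flag must be globally visible
// *before* the emptiness check, or we can sleep on data that arrived
// in between.
bool must_wait() {
    enable_interrupts();
    return !queue_has_data();
}
```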
    • Nadav Har'El's avatar
      Fix missed wakeups in so_wake_poll · fd28e12d
      Nadav Har'El authored
      This patch fixes two bugs in so_wake_poll() which caused us to miss
      some poll wakeups, resulting in poll()s that never wake up. This can be
      seen as a hang in the following simple loop exercising memcached:
               (while :; do echo "stats" | nc 192.168.122.100 11211; done)
      
      The two fixes are:
      
      1. If so_wake_poll() decides *not* to call poll_wake() - because it sees
         zero data on this packet - it mustn't reset the SB_SEL flag on the
         socket, or we will ignore the next event even when it does have data.
      
      2. To see if the socket is readable, we need to call soreadable(), not
         soreadabledata() - the former adds the connection close event to the
         readability. See sopoll_generic(), which also sets a readability
         event in that case.
      fd28e12d
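Both fixes can be captured in a toy model: a wakeup check that leaves the "poller waiting" flag alone when it declines to wake, and a readability test that also counts connection close. The struct, flag, and helper names mirror the BSD terms loosely but are illustrative, not the actual sources.

```cpp
struct socket_state {
    bool sb_sel = false;   // stands in for SB_SEL: a poller is waiting
    int  data   = 0;       // bytes available to read
    bool closed = false;   // connection closed by the peer
};

bool soreadabledata(const socket_state& s) { return s.data > 0; }

// Fix 2: readability must include the close event, as in soreadable()
// and sopoll_generic().
bool soreadable(const socket_state& s) {
    return soreadabledata(s) || s.closed;
}

// Returns true if a wakeup was delivered.
bool so_wake_poll(socket_state& s) {
    if (!soreadable(s))
        return false;      // Fix 1: leave sb_sel set for the next event
    if (s.sb_sel) {
        s.sb_sel = false;  // consume the poll request and wake the poller
        return true;
    }
    return false;
}
```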