  8. Apr 25, 2014
    • net: log packets going through loopback and virtio-net. · f30ba40d
      Tomasz Grabiec authored
      
      There was no way to sniff packets going through OSv's loopback
      interface, and I needed to debug in-guest TCP traffic. Packets are
      logged using the tracing infrastructure. Packet data is serialized as
      sample data up to a limit, which is currently hardcoded to 128 bytes.
      
      To enable capturing of packets, just enable the tracepoints named:
        - net_packet_loopback
        - net_packet_eth
      
      Raw data can be seen in `trace list` output. Better presentation
      methods will be added in the following patches.
      
      This may also become useful when debugging network problems in the
      cloud, as we have no ability to run tcpdump on the host there.
      
      Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
      f30ba40d
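      The truncation described above can be sketched roughly as follows. Only the 128-byte cap comes from the commit message; the `capture_packet` name and the use of `std::vector` as the sample buffer are illustrative stand-ins, not OSv's actual trace API:

      ```cpp
      #include <cstdint>
      #include <vector>
      #include <algorithm>

      // Hypothetical cap mirroring the hardcoded 128-byte limit from the commit.
      constexpr size_t capture_limit = 128;

      // Sketch: copy packet data into a trace sample, truncated to the limit.
      // Returns the number of bytes actually captured.
      size_t capture_packet(const uint8_t* data, size_t len,
                            std::vector<uint8_t>& sample)
      {
          size_t n = std::min(len, capture_limit);
          sample.assign(data, data + n);
          return n;
      }
      ```

      A 1500-byte Ethernet frame would thus occupy only 128 bytes of trace-buffer space, at the cost of losing the tail of the payload.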
    • trace: support for serializing variable-length sequences of bytes · 2d795f99
      Tomasz Grabiec authored
      
      A tracepoint argument which extends 'blob_tag' will be interpreted as
      a range of byte-sized values. The storage required to serialize such
      an object is proportional to its size.
      
      I need it to implement storage-friendly packet capturing using the tracing layer.
      
      It could also be used to capture variable-length strings. The current
      limit (50 chars) is too short for some paths passed to vfs calls. With
      variable-length encoding, we could set a more generous limit.
      
      Signed-off-by: Tomasz Grabiec <tgrabiec@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
      2d795f99
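      One common way to serialize a variable-length byte range with size-proportional storage is a length prefix, sketched below. The `serialize_blob` name and the 16-bit length field are assumptions for illustration, not the commit's actual encoding:

      ```cpp
      #include <cstdint>
      #include <vector>

      // Sketch: length-prefixed serialization of a byte range, so that the
      // reader knows how many bytes follow and storage grows with the data.
      std::vector<uint8_t> serialize_blob(const uint8_t* begin, const uint8_t* end)
      {
          uint16_t len = static_cast<uint16_t>(end - begin);
          std::vector<uint8_t> out;
          out.reserve(sizeof(len) + len);
          // store the length first, little-endian
          out.push_back(len & 0xff);
          out.push_back(len >> 8);
          out.insert(out.end(), begin, end);
          return out;
      }
      ```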
    • memory: add facility to indicate that a thread is temporarily a reclaimer · cfba1a5e
      Avi Kivity authored
      
      We already have a facility to indicate that a thread is a reclaimer
      and should be allowed to allocate reserve memory (since that memory
      will be used to free memory).  Extend it to allow indicating that a
      particular code section is used to free memory, not the entire thread.
      
      Reviewed-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
      cfba1a5e
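      A section-scoped reclaimer marker of this kind is naturally expressed as an RAII guard over a per-thread counter; a minimal sketch, with all names (`scoped_reclaimer`, `reclaim_depth`) invented for illustration rather than taken from OSv:

      ```cpp
      // Sketch: mark only a code section, not the whole thread, as a reclaimer.
      // A depth counter allows nesting; the flag clears when the scope exits.
      thread_local int reclaim_depth = 0;

      struct scoped_reclaimer {
          scoped_reclaimer()  { ++reclaim_depth; }
          ~scoped_reclaimer() { --reclaim_depth; }
      };

      // The allocator would consult this to decide whether reserve memory
      // may be handed out.
      bool thread_is_reclaimer() { return reclaim_depth > 0; }
      ```

      The guard makes it impossible to forget to clear the flag on an early return, which is the usual reason to prefer this shape over explicit set/clear calls.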
    • pagecache: change locking between mmu and ARC · 4792e60c
      Gleb Natapov authored
      
      Currently, vma_list_mutex is used to protect against races between ARC
      buffer mapping by the MMU and eviction by ZFS. The problem is that MMU
      code calls into ZFS with vma_list_mutex held, so on that path all ZFS
      related locks are taken after vma_list_mutex. An attempt to acquire
      vma_list_mutex during ARC buffer eviction, while many of the same ZFS
      locks are already held, causes a deadlock. This was solved by using
      trylock() and skipping an eviction if vma_list_mutex could not be
      acquired, but it appears that some mmapped buffers are destroyed not
      during eviction but after writeback, and this destruction cannot be
      delayed. This calls for a locking scheme redesign.
      
      This patch introduces arc_lock, which has to be held during access to
      read_cache; it prevents simultaneous eviction and mapping. arc_lock
      should be the innermost lock held on any code path, and the code is
      changed to adhere to this rule. To that end, the patch replaces the
      ARC_SHARED_BUF flag with a new b_mmaped field. The reason is that
      access to the b_flags field is guarded by hash_lock, and it is
      impossible to guarantee the same order between hash_lock and arc_lock
      on all code paths. Dropping the need for hash_lock is a nice solution.
      
      Signed-off-by: Gleb Natapov <gleb@cloudius-systems.com>
      4792e60c
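      The core of the scheme, a single innermost lock serializing mapping and eviction of read_cache entries, can be sketched as below. The types and method names are illustrative stand-ins, not OSv's pagecache code:

      ```cpp
      #include <cstdint>
      #include <mutex>
      #include <unordered_map>

      // Sketch: arc_lock serializes all access to the read_cache map and is
      // always the innermost lock taken, so it can be acquired safely from
      // both the MMU mapping path and the ZFS eviction path.
      struct arc_state {
          std::mutex arc_lock;                            // innermost lock
          std::unordered_map<uint64_t, void*> read_cache; // buffer id -> mapping

          void map_buffer(uint64_t id, void* page) {
              std::lock_guard<std::mutex> g(arc_lock);
              read_cache[id] = page;        // MMU side: record the mapping
          }

          bool evict_buffer(uint64_t id) {
              std::lock_guard<std::mutex> g(arc_lock);
              return read_cache.erase(id) != 0;  // ZFS side: cannot race with map
          }
      };
      ```

      Because arc_lock is taken last on every path, no lock-order cycle with the ZFS locks (such as hash_lock) can form.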
    • mmu: populate pte in page_allocator · 31d939c7
      Gleb Natapov authored
      
      Currently, page_allocator returns a page to a page mapper and the
      latter populates a pte with it. Sometimes page allocation and pte
      population need to appear atomic, though. For instance, in the case of
      the pagecache we want to prevent page eviction before the pte is
      populated, since page eviction clears the pte; but if allocation and
      mapping are not atomic, the pte can be populated with stale data after
      eviction. With the current approach, a very widely scoped lock is
      needed to guarantee atomicity. Moving pte population into
      page_allocator allows for much simpler locking.
      
      Signed-off-by: Gleb Natapov <gleb@cloudius-systems.com>
      31d939c7
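      A minimal sketch of the idea, with the allocator writing the pte itself under one narrow lock instead of handing the page back to the mapper. Every name here (`allocate_and_map`, `page_lock`, the present bit) is a hypothetical stand-in for the real mmu types:

      ```cpp
      #include <cstdint>
      #include <mutex>
      #include <new>

      // Hypothetical lock that the eviction path would also take.
      static std::mutex page_lock;

      // Sketch: allocate a page and populate the pte in one critical section,
      // so eviction can never observe a window between the two.
      void* allocate_and_map(uint64_t* pte)
      {
          std::lock_guard<std::mutex> g(page_lock);
          void* page = ::operator new(4096);            // stand-in for a frame
          *pte = reinterpret_cast<uint64_t>(page) | 1;  // set a "present" bit
          return page;
      }
      ```

      With the old split design, the lock would have to span the caller's whole allocate-then-map sequence; here its scope shrinks to a single function.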
    • pagecache: track ARC buffers in the pagecache · 4fd8693a
      Gleb Natapov authored
      
      Current code assumes that for the same file and offset ZFS will always
      return the same ARC buffer, but this appears not to be the case: ZFS
      may create a new ARC buffer while an old one is undergoing writeback.
      This means that we need to track the mapping between file/offset and
      the mmapped ARC buffer ourselves, which is exactly what this patch
      does. It adds a new kind of cached page that holds a pointer to an ARC
      buffer, and stores these pages in a new read_cache map.
      
      Signed-off-by: Gleb Natapov <gleb@cloudius-systems.com>
      4fd8693a
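      The file/offset-to-buffer tracking can be sketched as a map keyed on the pair, as below. The key and value types, and the function names, are illustrative assumptions rather than the patch's actual read_cache definition:

      ```cpp
      #include <cstdint>
      #include <map>
      #include <utility>

      // Sketch: our own record of which ARC buffer is mmapped for a given
      // (file, offset), independent of what ZFS returns later.
      using file_off = std::pair<uint64_t, uint64_t>;  // (file id, offset)

      std::map<file_off, void*> read_cache;

      void remember_buffer(uint64_t file, uint64_t off, void* arc_buf) {
          read_cache[{file, off}] = arc_buf;
      }

      void* lookup_buffer(uint64_t file, uint64_t off) {
          auto it = read_cache.find({file, off});
          return it == read_cache.end() ? nullptr : it->second;
      }
      ```

      On a later fault for the same offset, the lookup returns the buffer that is actually mapped, even if ZFS has since created a replacement buffer for writeback.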
    • mmu: separate page unmapping and freeing. · e9756adc
      Gleb Natapov authored
      
      Unmap pages as soon as possible instead of waiting for max_pages to
      accumulate. This will allow freeing pages outside of vma_list_mutex in
      the future.
      
      Signed-off-by: Gleb Natapov <gleb@cloudius-systems.com>
      e9756adc
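      The unmap-now, free-later split can be sketched as clearing each pte immediately while queuing the page for a deferred release. All names and the pte encoding below are illustrative, not the patch's code:

      ```cpp
      #include <cstdint>
      #include <new>
      #include <vector>

      // Pages whose ptes are already cleared, awaiting release (eventually
      // outside vma_list_mutex).
      std::vector<void*> deferred_pages;

      void unmap_pte(uint64_t* pte)
      {
          uint64_t old = *pte;
          *pte = 0;                       // unmap immediately
          if (old & 1) {                  // page was present
              deferred_pages.push_back(reinterpret_cast<void*>(old & ~1ull));
          }
      }

      void free_deferred()
      {
          for (void* p : deferred_pages) {
              ::operator delete(p);       // actual release, done later
          }
          deferred_pages.clear();
      }
      ```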
  11. Apr 20, 2014
    • virtio: fix virtio-blk under debug allocator · a888df1a
      Avi Kivity authored
      
      The debug allocator can allocate non-contiguous memory for large requests,
      but since b7de9871 it uses only one sg entry for the entire buffer.
      
      One possible fix is to allocate contiguous memory even under the debug
      allocator, but in the future we may wish to allow discontiguous allocation
      when not enough contiguous space is available.  So instead we implement
      a virt_to_phys() variant that takes a range, and outputs the physical
      segments that make it up, and use that to construct a minimal sg list
      depending on the input.
      
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
      a888df1a
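      The range-taking virt_to_phys variant can be sketched as a walk over the virtual range, page by page, merging physically contiguous pieces into one segment. The real page tables are not available here, so the per-page translation is injected as a callable; all names are illustrative:

      ```cpp
      #include <cstdint>
      #include <vector>
      #include <functional>
      #include <algorithm>

      struct segment { uint64_t phys; uint64_t len; };

      constexpr uint64_t page_size = 4096;

      // Sketch: split [virt, virt + len) at page boundaries, translate each
      // piece, and coalesce adjacent pieces that are physically contiguous,
      // yielding the minimal sg list for the range.
      std::vector<segment> virt_to_phys_range(
          uint64_t virt, uint64_t len,
          const std::function<uint64_t(uint64_t)>& translate)
      {
          std::vector<segment> segs;
          while (len) {
              uint64_t in_page = page_size - (virt % page_size);
              uint64_t chunk = std::min(len, in_page);
              uint64_t phys = translate(virt);
              // extend the previous segment if physically contiguous
              if (!segs.empty() && segs.back().phys + segs.back().len == phys) {
                  segs.back().len += chunk;
              } else {
                  segs.push_back({phys, chunk});
              }
              virt += chunk;
              len -= chunk;
          }
          return segs;
      }
      ```

      Under the normal allocator the whole buffer collapses into one segment; under the debug allocator, each physically disjoint piece becomes its own sg entry.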