  1. Dec 10, 2013
  2. Dec 09, 2013
  3. Dec 08, 2013
    • tests: add test for thread completion · d213c3fe
      Glauber Costa authored
      
      This test goes together with thread detach, but I am also calling join
      to make sure we're not breaking it. It is unfortunate that the test is
      quite non-deterministic and we cannot reliably test for failure. On the
      flip side, it did help me catch a couple of bugs in my implementation,
      so it should eventually explode somewhere if a bug appears.
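
      A minimal sketch of the shape of such a test (illustrative only; the
      actual test in the tree may differ):

          #include <pthread.h>
          #include <cassert>

          static void* work(void*) { return nullptr; }

          int main()
          {
              // Detached thread: we cannot join it, so a broken cleanup can
              // only show up indirectly (e.g. as a crash or a leak).
              pthread_t detached;
              assert(pthread_create(&detached, nullptr, work, nullptr) == 0);
              assert(pthread_detach(detached) == 0);

              // Joined thread: make sure detach support did not break join.
              pthread_t joined;
              assert(pthread_create(&joined, nullptr, work, nullptr) == 0);
              assert(pthread_join(joined, nullptr) == 0);
              return 0;
          }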
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
    • sched: implement pthread_detach · afcf4735
      Glauber Costa authored
      
      I needed to call detach in test code of mine, and it isn't implemented.
      The code that uses it may or may not stay in the end, but nevertheless,
      let's implement it.
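
      A sketch of what the entry point could look like (the handle-to-object
      mapping below is a hypothetical helper, not the exact OSv internals):

          #include <pthread.h>
          #include <cerrno>

          int pthread_detach(pthread_t tid)
          {
              // lookup_pthread() is a hypothetical helper mapping the libc
              // handle to the internal object that owns the sched::thread.
              pthread_internal* p = lookup_pthread(tid);
              if (!p) {
                  return ESRCH;
              }
              // Mark the thread so that, on exit, its cleanup function frees
              // both the sched::thread and the wrapper; nobody will join it.
              p->thread->detach();
              return 0;
          }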
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
    • sched: standardize call to _cleanup · d754d662
      Glauber Costa authored
      
      set_cleanup is quite a complicated piece of code. It is very easy to get
      it to race with other thread destruction sites, which became abundantly
      clear when we tried to implement pthread detach.

      This patch tries to make that easier by restricting how and when
      set_cleanup can be called. The problem is that currently a thread may or
      may not have a cleanup function, and a call to set_cleanup may change
      our decision about whether to clean up at all.

      From this point on, set_cleanup will only tell us *how* to clean up. If
      and when we clean up is a decision we make ourselves. For instance, if a
      thread is block-local, the destructor will be called at the end of the
      block. In that case, the _cleanup function will be there anyway: we will
      just not call it.

      We set a default cleanup function for all created threads that just
      deletes the current thread object. Anything coming from pthread will
      override it by also deleting the pthread object. And again, it is
      important to note that they will set up those cleanup functions
      unconditionally.
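
      As a toy model of the new contract (assumed names, not the literal
      patch):

          #include <functional>
          #include <utility>

          struct thread {
              std::function<void()> _cleanup;
              // Default for every created thread: just free the object.
              thread() : _cleanup([this] { delete this; }) {}
              // set_cleanup() now only chooses *how* to clean up; whether
              // _cleanup runs is decided at one well-defined point.
              void set_cleanup(std::function<void()> fn) {
                  _cleanup = std::move(fn);
              }
          };

          struct pthread_wrapper {
              thread* t = new thread;
              pthread_wrapper() {
                  // pthread overrides the default unconditionally: the
                  // wrapper object is freed together with the thread.
                  t->set_cleanup([this] { delete t; delete this; });
              }
          };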
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
    • sched: Use an integer for thread ids · 5c652796
      Glauber Costa authored
      
      Linux uses a 32-bit integer for pid_t, so let's do the same. This is
      because there are functions in which we have to return our id back to
      the application. One of them is gettid, which we already have in the
      tree.

      Theoretically, we could come up with a mapping between our 64-bit ids
      and the Linux ones, but since we would have to maintain that mapping
      anyway, we might as well just use the Linux pids as our default IDs.
      The maximum size for those is 32 bits. That is not enough if we just
      allocate pids by bumping a counter, but since we will have to maintain
      the bitmaps anyway, 32 bits gives us as many as 4 billion PIDs.
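
      A sketch of the kind of bitmap allocator this implies (illustrative;
      the sizes and names are assumptions):

          #include <bitset>
          #include <cassert>
          #include <cstddef>

          // Ids are 32-bit like Linux pid_t; a bitmap lets us reuse released
          // ids instead of bumping a counter forever.
          constexpr std::size_t max_pids = 1u << 16;  // small, for the sketch
          static std::bitset<max_pids> pid_in_use;

          int allocate_pid()
          {
              for (std::size_t i = 1; i < max_pids; ++i) {  // pid 0 reserved
                  if (!pid_in_use[i]) {
                      pid_in_use[i] = true;
                      return static_cast<int>(i);
                  }
              }
              return -1;  // exhausted
          }

          void free_pid(int pid)
          {
              assert(pid > 0 && static_cast<std::size_t>(pid) < max_pids);
              pid_in_use[pid] = false;
          }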
      
      avi: remove unneeded #include
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
    • xen: disable pvclock for more than 32 CPUs · 0ddd6ef1
      Glauber Costa authored
      
      Xen's shared info contains hardcoded space for only 32 CPUs. Because we
      use those structures to derive timing information, we would basically be
      accessing random memory beyond that point. This is very hard to test and
      trigger, so what I did to demonstrate it (although that wasn't really
      needed, math could be used for that...) was to print the first timing
      information each cpu would produce. I could verify that the time on CPUs
      above 32 was behind the time produced on CPUs below 32.

      It is possible to move the vcpu area to a different location, but this
      is a relatively new feature of the Xen hypervisor: Amazon won't support
      it. So we need a disable path anyway. I will open an issue for somebody
      to implement that support eventually.

      Another user of the vcpu structure is interrupts. But for interrupts the
      story is easier, since we can select which CPUs take interrupts, and
      only take them on the first 32 CPUs. In any case, we are taking them all
      on CPU0 now, so that is already under control.
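
      The disable path reduces to a guard of roughly this shape (a sketch;
      the constant name is ours, but the 32-vcpu limit comes from Xen's
      legacy shared_info layout):

          // Xen's legacy shared_info page has vcpu_info slots for only 32
          // vcpus, so pvclock data simply does not exist for higher CPU ids.
          constexpr unsigned xen_shared_info_max_vcpus = 32;

          bool xen_pvclock_usable(unsigned cpu_id)
          {
              return cpu_id < xen_shared_info_max_vcpus;
          }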
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • sched: initialize clock later · 1d31d9c3
      Glauber Costa authored
      
      Right now we take a clock measurement very early during cpu
      initialization. That forces an unnecessary dependency between the sched
      and clock initializations.

      Since that clock value is only used to determine for how long the cpu
      has been running, we can initialize the runtime later, when we init the
      idle thread: nothing should be running before it. After doing this, we
      can move the sched initialization a bit earlier.
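
      In sketch form (assumed names for the accessors), the change moves the
      clock sample from early cpu bring-up into idle-thread setup:

          void cpu::init_idle_thread()
          {
              // Nothing can have run on this cpu before the idle thread
              // exists, so this is the earliest baseline we actually need.
              _runtime_start = clock::get()->time();  // hypothetical member
              // ... create and attach the idle thread ...
          }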
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • xen: int vs long issues - OSv side · 1bbe05dd
      Glauber Costa authored
      
      It seems that we also had int vs long problems in our own code. I am
      really surprised that the C++ compiler didn't throw any warnings for
      this, since all the word sizes are quite explicit. In any case, this
      seems to be the missing piece for xen booting with many CPUs.

      It boots fine now with up to 32 CPUs. After that, other problems start
      to appear.
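
      For illustration, the class of bug involved looks like this (not the
      actual offending line):

          #include <cstdio>

          int main()
          {
              // On x86_64, unsigned long is 64 bits and int is 32. Storing
              // a value that needs the high bits into an int truncates
              // silently; gcc only complains with -Wconversion.
              unsigned long big = 0x100001234UL;  // bit 32 is set
              int truncated = big;
              std::printf("%lx -> %x\n", big, (unsigned)truncated);
              // prints: 100001234 -> 1234
              return 0;
          }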
      
      Fixes #113
      
      Signed-off-by: Glauber Costa <glommer@cloudius-systems.com>
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
    • vfs: Fix duplicate in-memory vnodes · e4aad1ba
      Raphael S. Carvalho authored
      
      Currently, namei() calls vget() unconditionally if no dentry is found.
      This is wrong because the path can be a hard link that points to a
      vnode that is already in memory.

      To fix the problem (a sketch of the resulting vget() flow follows the
      list):

        - Use the inode number as part of the hash in vget().

        - Use vn_lookup() in vget() to make sure we have one vnode in memory
          per inode number.

        - Push the vget() calls down to the individual filesystems and make
          VOP_LOOKUP return a vnode.

        - Drop the lock in vn_lookup() and assert that vnode_lock is held.
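
      The sketch below shows the assumed shape of the new flow (signatures
      and lock helpers are illustrative, not the exact VFS code):

          // The hash now includes the inode number, so two paths (hard
          // links) resolving to the same inode find the same vnode.
          struct vnode* vget(struct mount* mp, uint64_t ino)
          {
              vnode_lock.lock();
              struct vnode* vp = vn_lookup(mp, ino);  // expects lock held
              if (vp) {
                  vref(vp);            // reuse the in-memory vnode
                  vnode_lock.unlock();
                  return vp;
              }
              vp = vn_alloc(mp, ino);  // allocate and hash by (mp, ino)
              vnode_lock.unlock();
              return vp;
          }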
      
      Signed-off-by: Pekka Enberg <penberg@cloudius-systems.com>
      Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
      Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
  4. Dec 07, 2013