- Apr 22, 2013
-
Christoph Hellwig authored
-
- Apr 17, 2013
-
Nadav Har'El authored
1. pthread_join should allow retval=NULL, in which case the return value is ignored. We therefore need to take void**, not void*&; otherwise passing NULL makes pthread_join crash in a strange way.
2. pthread_join forgot to delete the object allocated in pthread_create.
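A minimal sketch of the fixed pattern (the pthread_obj wrapper and its join() method are hypothetical stand-ins, not OSv's real internals): take void**, dereference it only when the caller supplied a destination, and free the per-thread object afterwards.

    #include <pthread.h>

    // Hypothetical wrapper allocated by our pthread_create(); not OSv's real type.
    struct pthread_obj {
        void* join();        // waits for the thread and returns its exit value
    };

    int pthread_join(pthread_t thread, void** retval)
    {
        auto* t = reinterpret_cast<pthread_obj*>(thread);
        void* ret = t->join();
        if (retval) {
            *retval = ret;   // only dereference when the caller wants the value (point 1)
        }
        delete t;            // free the object allocated in pthread_create (point 2)
        return 0;
    }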
-
Nadav Har'El authored
mmap/munmap and its cousins mmu::map_*() used to leak the 48-byte "vma" object, twice. The idea of separating reserve() and map() was a good one, but it was implemented incorrectly, causing a vma object to be allocated twice per mmap(), and both copies were leaked because evacuate() didn't free them. So I switched to a simpler implementation where the "vma" object is internal to mmu.cc and not used by any of the callers; the struct vma is now properly allocated only once per mmap(), and freed on munmap().
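A rough sketch of the intended ownership (illustrative only, not the actual mmu.cc code; the registry and helper names are made up): exactly one vma per mapping, created when the mapping is established and destroyed when the range is unmapped.

    #include <cstdint>
    #include <map>

    struct vma { uintptr_t start, end; unsigned perm; };

    std::map<uintptr_t, vma*> vma_list;   // hypothetical registry keyed by start address

    void add_mapping(uintptr_t addr, size_t size, unsigned perm)
    {
        auto* v = new vma{addr, addr + size, perm};   // single allocation per mmap()
        vma_list[addr] = v;
        // ... populate page tables (omitted) ...
    }

    void remove_mapping(uintptr_t addr)
    {
        auto it = vma_list.find(addr);
        if (it == vma_list.end()) return;
        // ... clear page tables (omitted) ...
        delete it->second;                            // freed exactly once, on munmap()
        vma_list.erase(it);
    }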
-
Avi Kivity authored
We used rbtree::insert_unique(), which only inserts if the value is unique. But uniqueness is defined in terms of the ordering function, which compares vruntime, not object identity. The result is that if another thread was queued with exactly the same vruntime, then the new thread is not queued. Fix by using rbtree::insert_equal(), which inserts unconditionally. Fixes the callout unit test.
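An illustration of the pitfall using standard containers rather than the intrusive rbtree the scheduler actually uses: when the comparator looks only at vruntime, two distinct threads with equal vruntime compare "equal", and unique insertion silently drops the second one.

    #include <set>
    #include <cassert>

    struct thread_stub { unsigned long vruntime; };
    struct by_vruntime {
        bool operator()(const thread_stub& a, const thread_stub& b) const {
            return a.vruntime < b.vruntime;   // identity is not part of the key
        }
    };

    int main()
    {
        thread_stub a{100}, b{100};                        // two threads, same vruntime
        std::set<thread_stub, by_vruntime> unique_q;       // insert_unique() semantics
        unique_q.insert(a);
        unique_q.insert(b);                                // silently dropped
        assert(unique_q.size() == 1);

        std::multiset<thread_stub, by_vruntime> equal_q;   // insert_equal() semantics
        equal_q.insert(a);
        equal_q.insert(b);                                 // kept
        assert(equal_q.size() == 2);
    }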
-
Guy Zana authored
-
Guy Zana authored
Creates two threads, a client and a server, that perform the TCP echo service: they complete 400 cycles of {accept(), read(), write(), close()} on the server side and {socket(), connect(), write(), read(), close()} on the client side.
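For reference, one client-side cycle of such a test might look roughly like this (plain POSIX sockets; the port and payload are made up and error handling is omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void echo_client_cycle()
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(2500);                       // arbitrary test port
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        const char msg[] = "hello";
        write(s, msg, sizeof(msg));                        // send a payload
        char buf[sizeof(msg)];
        read(s, buf, sizeof(buf));                         // expect it echoed back
        close(s);
    }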
-
Guy Zana authored
-
Guy Zana authored
-
Guy Zana authored
-
Guy Zana authored
-
Guy Zana authored
-
Guy Zana authored
1. Fix many races.
2. Use a mutex to protect access to the callout data structure.
3. Delete callout->c_state and use the original callout->c_flags; also update CALLOUT_PENDING and CALLOUT_ACTIVE accordingly.
4. Support correct behaviour for callout_pending(), callout_active() and callout_deactivate().
5. Implement a mutex version, callout_init_mtx(), used by the tcp syncache.
6. Add debug prints.
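As a reminder of how these are normally used, here is a FreeBSD timeout(9)-style sketch rather than OSv's exact code; the foo_* names are placeholders and the FreeBSD kernel headers (sys/callout.h, sys/mutex.h) are assumed. A callout initialized with callout_init_mtx() has its handler invoked with the associated mutex already held.

    static struct mtx     foo_mtx;        /* placeholder lock */
    static struct callout foo_callout;    /* placeholder callout */

    static void foo_timeout(void* arg)
    {
        /* Initialized with callout_init_mtx(), so foo_mtx is held here and the
           callout_pending()/callout_active() races are resolved for us. */
        callout_deactivate(&foo_callout);                     /* no longer "about to run" */
        /* ... actual timer work, e.g. a retransmit ... */
        callout_reset(&foo_callout, hz, foo_timeout, NULL);   /* re-arm one tick interval later */
    }

    static void foo_init(void)
    {
        mtx_init(&foo_mtx, "foo timer", NULL, MTX_DEF);
        callout_init_mtx(&foo_callout, &foo_mtx, 0);
        callout_reset(&foo_callout, hz, foo_timeout, NULL);
    }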
-
Guy Zana authored
-
Guy Zana authored
-
Guy Zana authored
-
Guy Zana authored
These are used by TCP.
-
Guy Zana authored
-
Guy Zana authored
-
Guy Zana authored
connect() fails because these lines were missing.
-
Guy Zana authored
Log lines weren't reaching the console in a synchronized way.
-
Guy Zana authored
-
Guy Zana authored
Previously, the len argument wasn't handled correctly: it should be returned to the user, and the sockaddr salen should be initialized by the API when possible.
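The usual shape of this contract, for an accept()-style call with an in/out length (a generic sketch, not the exact OSv wrapper): the caller passes its buffer size in, the API copies out as much of the address as fits, and the actual address length is stored back through the pointer.

    #include <sys/socket.h>
    #include <algorithm>
    #include <cstring>

    // Generic sketch: copy a kernel-side address out to the caller and
    // report its real length back through *len (the in/out parameter).
    void copy_out_sockaddr(const sockaddr* src, socklen_t srclen,
                           sockaddr* dst, socklen_t* len)
    {
        if (!dst || !len) {
            return;                               // caller doesn't want the address
        }
        auto tocopy = std::min(*len, srclen);     // never overrun the caller's buffer
        std::memcpy(dst, src, tocopy);
        *len = srclen;                            // report the full address length back
    }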
-
Guy Zana authored
-
Guy Zana authored
The rwlock stub was written before our mutex was recursive; now we simply don't need any extra recursion layer in rwlock.
-
Guy Zana authored
-
Guy Zana authored
Move the "have more work" mark to before the handler: during the handler's execution another thread may signal the netisr worker thread, so this avoids a race.
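The general shape of this fix (illustrative only; the flag and helper names below are made up, not the actual netisr code): clear the "more work" mark before running the handler, so a producer that sets it while the handler is running is seen on the next iteration instead of being lost.

    #include <atomic>

    void wait_for_signal();   // hypothetical: block until another thread signals us
    void run_handler();       // hypothetical: process pending netisr work

    std::atomic<bool> have_more_work{false};

    void netisr_worker_loop()
    {
        for (;;) {
            wait_for_signal();
            while (have_more_work.exchange(false)) {   // clear the mark *before* handling
                run_handler();   // a producer setting the flag during this call is not missed
            }
        }
    }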
-
Guy Zana authored
-
Guy Zana authored
-
Guy Zana authored
SOCK_RAW should be used with sosend(), not sosend_dgram(); the same goes for soreceive() vs. soreceive_dgram().
-
Guy Zana authored
-
Guy Zana authored
The bufring was never used by any driver in OSv, so it's not useful.
-
Nadav Har'El authored
tst-leak.so now enables leak detection, runs a few things of interest, and disables leak detection. So "osv leak show" after running this test will show any leaks. I'm using this to confirm that the leaks I'm currently working on are really fixed.
-
Avi Kivity authored
We want to reschedule if we woke a local thread, since it may have higher priority than us. However, commit 31572c85 broke this by changing the preemption counter. Fix by adjusting the check to take this into account. Noted by Nadav.
-
Avi Kivity authored
Saves ~17MB.
-
Nadav Har'El authored
-
Avi Kivity authored
-
Avi Kivity authored
handle_incoming_wakeups() must not be called from a preemptible context, since it manipulates per-cpu variables. Fix by eliminating these calls: in one call site the call is simply removed, since it will be called immediately afterwards with interrupts disabled; in the other call site, the call is pushed into an existing irq-disabled region. Fixes livelocks where a thread is placed into an incoming_wakeups queue but has the wrong bit set in incoming_wakeups_mask.
-
Avi Kivity authored
-
Avi Kivity authored
Since wake() manipulates per-cpu variables, we need to disable preemption so the cpu pointers aren't invalidated by migration.
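A sketch of the idea (the guard type, primitives and scheduler fields below are illustrative, not OSv's exact API): keep preemption disabled for the whole window in which a per-cpu pointer is obtained and used, so migration cannot invalidate it mid-way.

    // Illustrative sketch only.
    void preempt_disable();   // hypothetical low-level primitives
    void preempt_enable();

    struct preemption_guard {
        preemption_guard()  { preempt_disable(); }
        ~preemption_guard() { preempt_enable(); }
    };

    void wake(thread& t)
    {
        preemption_guard guard;              // no migration while we hold per-cpu pointers
        cpu* target = t.owner_cpu;           // stays valid for the whole scope
        target->incoming_wakeups.push(&t);   // queue named in the commit message above
        target->incoming_wakeups_mask.set(current_cpu_id());
        target->send_wakeup_ipi();           // hypothetical notification of the target cpu
    }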
-
Avi Kivity authored
The recording mechanism aligns trace messages; replay needs to follow suit.
-