ref: 3ab9f80448b5d80352bed6251b98f41aeee7af09
dir: /notebook/
Let's get venti lookups working. At present, we're not finding the entry in the bucket, even though the score was taken directly from fossil, which I'm typing this into, so it definitely is there. Walk backwards through the logic in oldventi, compare the results, find the problem, document this here.

lookupscore in icache.c checks the index cache, then falls back to loadientry if it's not found there. That takes the index, score, and type, and generates an index entry; let's check that out in /sys/src/cmd/venti/srv/index.c:652... First, it checks the bloom filter; that's used to detect absence early, so we can skip it. Then, it loads the index bucket - double check that we're grabbing the right bucket, then check that we're looking up the entry in the bucket correctly...

loadibucket goes through loadibucket1, but nothing else happens in loadibucket. loadibucket1 processes the args a bit for loadibucket0, which does the real work. It's given the index, the bucket number, an out pointer to which the index section is written if found, an out pointer to which the section-relative bucket number is written if found, an out pointer to which the index bucket is unpacked from the raw bytes, and the mode with which to retrieve the disk block. The absolute bucket index is hashbits(score, 32) divided by the index divisor. Aside from that, all parameters are passed through from loadibucket exactly.

The score is a u8*, and I think I used char* for neoventi. This is quite probably a bug even if it's not the one I'm looking for. Nothing else is of interest in loadientry, parameter-wise. Let's go in order, though. Double check the hash... Given the u8* score and the result bits, the hash is calculated from the first four bytes of the score, interpreted big-endian. The maximum length of the hash is 32 bits; if fewer bits are requested, the value is simply shifted over. ...we're using U32GET(score) to retrieve the hash, but score is char*. 
U32GET only casts the _result_ to u32, not the parameter. Fix that, compare the resulting bucket index... That was definitely _one_ of the issues; this moves us from bucket 4880611 to bucket 4226740. There are likely other stupid type errors in here still >_> Going to preemptively switch to using u8int* for the score everywhere. That might be enough to solve this, and even if not, it will prevent other stupid bugs in the future. Didn't fix this, but I wasn't really expecting it to, so it's fine ☺

Idea: maybe write a frontend to the venti/srv code that dumps intermediate info for comparison? That should be relatively easy - there are other venti/srv frontends (wrarena, for instance) I can use as a template - and it would make it easy to verify. Yeah, that was a good idea.

	% ./6.tester
	found in bucket 1913977

It COMPLETELY disagrees? Need to double-check units; venti/tester is reporting section-relative buckets, and I don't think I am in neoventi. Nope, not the problem. Modified venti/tester reports:

	found in bucket 1913977, [0, 4880645], 1684300339/880 = 1913977

The index divisor is 880, the 'hash' is 1684300339. neoventi reports 'Looking in bucket 3719531835 / 880 = 4226740'. The hash is wrong?? THE SCORE IS WRONG! >_> We have it in string form, we want it in raw bytes! ...no, actually, we have it in raw bytes. I was embedding it in string form in the tester >_> Oops. With that fixed:

	% ./6.tester
	found in bucket 4226740, [0, 4880645], 3719531835/880 = 4226740

We're looking in the right bucket! Yay! That means we're misreading the bucket? Not scanning it correctly? Bucket data is after the u16 entry count and u32 magic; IBucketSize in venti/srv is U32Size+U16Size. unpackibucket sets n to the u16 at the front, and the data to buf+IBucketSize; the magic is checked as the u32 after the first u16. We're doing all that right. We're reading the bucket with one entry in it. Probably our bucketlookup that's wrong, then? ...uh. Or... maybe not? 
Extended the tester to unpack and report the bucket contents:

	IBucket b;
	unpackibucket(&b, no->data, sect->bucketmagic);
	print("bucket entries: %d\n", b.n);
	int h = bucklook(score, -1, b.data, b.n);
	print("bucklook: %d\n", h);
	if(h&1){
		h ^= 1;
		print("Found at offset %d\n", h);
	} else {
		print("not found in bucket\n");
	}

Result:

	bucket entries: 1
	bucklook: 38
	not found in bucket

venti/srv doesn't see that score, either??? What the hell?

	vacfs: vacfsopen: no block with score ddb38d3b21f95644b181052dc096ebdf2862dd83/16 exists

Yeah, confirmed. Huh??? I copied that score from fossil/l... fossil/last. Are its output scores invalid? ...yep. Oops. Let's try with a new score! 4091f8b8aafc7b8a815f38534b70a171e4ae3e44, taken from fscons. Establish a baseline first, which I should ALWAYS do: vacfs is able to mount that one. Okay. venti/tester does not see it, though. Are vac scores not direct venti scores? I'm fairly sure they should be, and a quick skim through the vacfs code supports that.

Okay, let's try a different approach: extend venti/tester to call loadientry directly, none of the manual bucket futzing:

	IEntry e;
	if(loadientry(index, score, -1, &e) < 0){
		print("failed to load ientry\n");
	} else {
		print("ientry loaded\n");
	}

	failed to load ientry

...hmmm. Is it possible for data in the latest arena to be unindexed? Perhaps the issue is the assumption that the data should be in the index to begin with. Walking through the source: readlump => lookupscore, which, if the entry is not in the cache, will loadientry; if loadientry fails, it passes an error up, and readlump fails. ...data in the latest arena has to be loaded into the index on startup, doesn't it? Is it injecting it directly into the index cache and leaving it dirty?? Yep: syncindex(). Okay, let's add a syncindex call to venti/tester... That's not working, either. Let's go a step higher: try readlump()?

	failed to read lump: no block with score %V%/4542084 exists

what. 
	seterr(EOk, "no block with score %V/%d exists", score, type);

Ahhhhhh, the %V format handler isn't installed, so the score isn't printed, and the score argument is being consumed as the type. Add ventifmtinstall();

	failed to read lump: no block with score 4091f8b8aafc7b8a815f38534b70a171e4ae3e44/-1 exists

Much better! Now, copy-paste that score and vacfs it again...

	% vacfs -h tcp!127.1!13031 vac:4091f8b8aafc7b8a815f38534b70a171e4ae3e44
	% ls /n/vac
	/n/vac/active
	/n/vac/archive
	/n/vac/snapshot

and lo! This is bullshit! Ah wait, we're reading the lump before syncindex; let's fix that...

	if(syncindex(index) < 0)
		sysfatal("sync failed: %r");
	Packet *data = readlump(score, -1, -1, nil);
	if(data == nil)
		print("failed to read lump: %r\n");
	else
		print("successfully read lump!\n");
	IEntry e;
	if(loadientry(index, score, -1, &e) < 0){
	...

and huh!

	successfully read lump!
	failed to load ientry

Progress! ...is it being inserted into the lump cache on startup, but not indexed?

	int cached;
	Packet *data = readlump(score, -1, -1, &cached);
	if(data == nil)
		print("failed to read lump: %r\n");
	else
		print("successfully read lump, cached: %d\n", cached);

	successfully read lump, cached: 0

The plot thickens! It's NOT in the lump cache! readlump then uses lookupscore to find the index address... which checks the index cache, then loads the ientry. Let's check the icache?

	IAddr addr;
	if(lookupscore(score, -1, &addr) == 0)
		print("found score in index cache at %llud, size %lud, type %d, blocks %d\n", addr.addr, addr.size, addr.type, addr.blocks);
	else
		print("not found in index cache\n");

	found score in index cache at 75373226395, size 300, type 16, blocks 1
	failed to load ientry

Yeeeep, there it is. It's inserted into the index cache on startup. That's separate from syncindex, though? Nope - removing syncindex results in it not being in the index cache. Whelp, pretty sure that's a bug in venti/venti: I don't think it will let you read the latest data when in read-only mode, where it skips syncindex. 
Okay, I don't intend to have this behavior, necessarily. Let's just do a full flush. % hget http://127.1:13032/flushicache Note: the index flusher is disgustingly slow. Disk I/O and CPU were both at effectively zero for minutes while it ran. I didn't realize it was _working_, it's so slow O_o.

	venti/read -h tcp!127.1!14011 vac:4091f8b8aafc7b8a815f38534b70a171e4ae3e44
	kill 108118
	Initializing neoventi build 1...
	reading 1416 map entries
	found 1416 arenas
	Parsing index v1...
	reading 1 map entries
	reading 1416 map entries
	received a connection at /net/tcp/18, fd 7
	handling read of score 4091f8b8aafc7b8a815f38534b70a171e4ae3e44
	Looking in bucket 1083308216 / 880 = 1231032
	relative bucket: 1231032, section [0, 4880645]
	entries: 3
	arena arenas0.140, start 212543899, size 17

WE HAVE THE DATA!!!!! Okay! We're able to read the data now. All RPC messages start with the RPC size, then the message type, then the tag. For Rread, the rest of the message is the data.

	dbuf = malloc(4 + size);
	dbuf[0] = (size+2)>>8;
	dbuf[1] = (size+2) & 0xFF;
	dbuf[2] = VtRread;
	dbuf[3] = buf[3];
	vtreadarena(arena, addr, dbuf+4, &size);
	print("WE HAVE THE DATA!!!!! size: %ud\n", size);
	// just slam the dbuf packet over the wire!
	if(write(conn.fd, dbuf, size + 4) != size+4){
		print("failed to write data\n");
	...

	venti/read: could not read block: read asked for 4091f8b8aafc7b8a815f38534b70a171e4ae3e44 got db1a96f91d93fca56f68c352a428a5fbdb71f0d5

Oh, lovely. The bucket's good, but the actual data read is busted. Back to venti/tester...

	Packet *data = readlump(score, -1, -1, &cached);
	if(data == nil)
		print("failed to read lump: %r\n");
	else
		print("successfully read lump, cached: %d, size: %lud\n", cached, packetsize(data));

	successfully read lump, cached: 0, size: 300

Okay. Back to readlump... It looks up the score as an IAddr, which is an address into the index. It then uses the index to map that to an address inside an arena, before actually loading the clump from that arena. 
Our vtreadlookup combines these steps: given a score, it looks it up in the index, finds the arena+address, and yields that info all to the caller. So, let's compare the intermediate results. First off, lookupscore() already gives us the right size. venti/tester already exposes it:

	if(lookupscore(score, -1, &addr) == 0)
		print("found score in index cache at %llud, size %lud, type %d, blocks %d\n", addr.addr, addr.size, addr.type, addr.blocks);

	found score in index cache at 75374255700, size 300, type 0, blocks 1

lookupscore() grabs the entry from loadientry(), which is just loading the bucket and finding the entry in the bucket. Found the issue while reading the code, no test needed :P

	*addr = U64GET((buf + 6 + (entry * IEntrySize) + 26));
	*size = U16GET((buf + 6 + (entry * IEntrySize) + 28));

The address is eight bytes, not two. The size offset should be 34, not 28. With that fixed:

	arena arenas0.140, start 212543899, size 300
	WE HAVE THE DATA!!!!! size: 300
	could not read block: read asked for 4091f8b8aafc7b8a815f38534b70a171e4ae3e44 got 746311957a8fafc724bc122884778f97d4a5c769

That's a different wrong hash, at least? And now the size is correct! Either the address is wrong, the conversion is wrong, or the way it's being read/sent is wrong, I think. Tester update:

	u64int aa;
	Arena *arena = amapitoa(index, addr.addr, &aa);
	if(arena)
		print("mapped to arena addr %llud\n", aa);

	mapped to arena addr 213573204

On our side, we've got:

	arena arenas0.140, start 212543899, size 300

That address looks wrong? Let's expand the tester a bit:

	Clump cl;
	u8int readscore[20];
	ZBlock *zb = loadclump(arena, aa, addr.blocks, &cl, readscore, 1);
	if(zb)
		print("read clump from disk: size %lud\n", cl.info.uncsize);

	read clump from disk: size 300

Okay. Let's double check that it is the offset that's the problem, by hardcoding it and seeing what happens. 
	venti/read: could not read block: read asked for 4091f8b8aafc7b8a815f38534b70a171e4ae3e44 got f8e0d5ed942a9fe3c234569b14b18de0536cdd61

We just get a different wrong hash >_> Okay, that was maybe a silly approach. Let's figure out _why_ the addresses differ... First, let's double check our pre-mapping addresses. The tester gives
> found score in index cache at 75374255700, size 300, type 0, blocks 1
75374255700. We've got
> found base addr 75373226395
Whoops. Our index entry is wrong, then? Let's double check the bucket, and where in it we are... The tester gives offset 38 for the bucket. We've got...
> found in bucket at offset 0
...huh. Oops? One entry is 38 bytes; we're finding it as the first entry in the bucket, while venti/srv finds it as the second? Ahhhh, the type is probably wrong. The hash for both entries is the same? Looks like the same data was written with multiple types?? ...ohhhh, the clump on disk isn't just the raw bytes! Oops! It'll probably work without checking the type if I fix the disk read? Yeaaaaaaah, loadclump:
• Reads blocks, not bytes
• calls unpackclump to actually get the data out
• uncompresses if necessary
We can't ignore any of that. The clump encoding may need more than the listed size. ...we can try something stupid, though. The clump on disk consists of:
• 4 bytes of magic
• 1 byte of type
• 2 bytes for the (compressed) size
• 2 bytes for the uncompressed size
• 20 bytes for the score
• 1 byte for the encoding (compression info)
• 4 bytes for the creator
• 4 bytes for the time
• and then the data
For an uncompressed block, we shooooould be able to just read 38 bytes plus the data, and skip the first 38 bytes? Didn't work; still had the wrong hash. Hopefully that means it's compressed?

	char clump[38];
	vtreadarena(arena, addr, clump, &size);
	print("clump encoding: %d\n", clump[29]);

Didn't even need the tester for this; the encoding is 2, which is - according to venti/srv/dat.h - ClumpECompress. It's compressed and needs unwhacking. Let's just copy unwhack in wholesale. 
That's ~200 lines of code, and there's no point reimplementing _that_ or, bluntly, understanding it. It fucking wooooorks!

	static int
	readclump(uchar *dst, VtArena arena, u64int addr, u8int blocks)
	{
		uchar *buf = malloc(blocks << ABlockLog);
		u16int size;
		size = blocks<<ABlockLog;
		vtreadarena(arena, addr, buf, &size);
		size = U16GET(buf+7);
		if(buf[29] == 2){
			// Needs decompression
			Unwhack uw;
			unwhackinit(&uw);
			if(unwhack(&uw, dst, size, buf+38, U16GET(buf+5)) != size)
				sysfatal("decompression failed: %s", uw.err);
		} else if(buf[29] == 1)
			memcpy(dst, buf+38, size);
		free(buf);
		return 1;
	}

# Sun Dec 24 19:09:00 EST 2023

Okay! Everything works and has been partly cleaned up and refactored. Needs more of that, but getting the write path working is a higher priority right now. Probably going to want to buffer writes, and actually handle Tsync as a flush call. If I'm going to want a server-side cache as well, I should probably have them unified, and have a separate set of dirty entries? The question is: what layer do I put the cache at? I could do a disk-level cache, which tracks data by address on disk, with a dirty bit on each entry - this would be a singular cache which handles both index and arena data. IIRC this would be akin to venti's dcache, and I'm not sure I want that. The index cache is the most useful, though if it does not get flushed, we need to do a scan on startup to populate it the same way venti does, and this has been one of the behaviors that has irked me. Blocking until all data is available is reasonable in this situation, but getting *into* that situation is kinda crazy.

One of the major annoyances with original venti is how poorly it utilizes the storage disk. While writing data, it will take a minute before it starts flushing _any_ of it out. We should be trying to write data more immediately than that. On the other hand, always writing immediately may not be ideal? But eh, I'd rather try anyways and then see how it works out. 
Venti is WORM, so normal patterns fall into two general categories in my experience:
• File system running on top
	- This flushes periodically
	- Most of the time, it will be totally idle
	- When it writes, it writes all data at once
• One-time upload of data
	- Archives, vacballs
	- This should also just fire data at us full-speed
In both situations, we should expect to have a constant stream of data when we're writing. We have two streams of data to write:
• The log
	- This is written linearly, chunk after chunk after chunk
	- Each chunk has a checksum, but calculating those is trivial
• The index
	- This writes to totally random parts of the disk
	- _scatter storage addressing_, after all!
Buffering the log might be reasonable, since it allows for writing larger blocks at a time. But... why speculate when I can measure? Start by just writing data the instant it comes in. If that's not fast enough, look into using multiple procs for flushing data. If that's not fast enough, look into other options. Overengineering is Bad™.

Just double checked venti, and it looks like it has one writer per partition, so it shouldn't be terribly hard to beat its performance without even resorting to multithreading - just by having a better implementation of the single-threaded writing. Caching / prefetching / some such system will definitely be needed to win on the read side - or, well, *probably* be needed. Parallelized readers might be sufficient, though that may require changes to libventi / vacfs / etc to enable multiple reads in flight at a time. But again, measure before designing. Worst case, neoventi's prototype has demonstrated that I _can_ do this, even if I have to change the design a bunch more. And I kinda want to do the final implementation in Hare anyways.

I'll need to set up a new neoventi for write testing, because testing it on the root drive would be moronic - I'd have to boot the "real" venti read-only for that, and I've no idea how fossil would handle that. 
That makes this as good a time as any to actually implement the config handling...

# Mon Dec 25 10:08:18 EST 2023

Actually, ended up focusing on the refactor / cleanup, still. That's sufficient for now, though. And, took a break to fix album art loading in zuke :P which broke during a botched merge. I should probably also take a break to implement a neoventi formatter, and disk structure checkers, to replace the venti tools. I can test those against existing venti, and it'll be useful for setting up the test system instead of relying on venti's tools. I also need to take a pause, and reread the original venti paper, in full, before doing so. That will be important for improving neoventi design. Having a better understanding of what I'm replacing is important, and will be critical for writing the paper.

TASK(fossil): neofossil should use spare blocks as cache. After a daily sync, my used-block count is 0, which is ridiculous - I should not need to reach out to venti to access e.g. /amd64/bin/rc!

Mon Dec 25 15:24:36 EST 2023: Turns out that this is intentional, per the venti paper. "Venti is as fast as disk anyways." Given that their own numbers show it as, at BEST, half as fast, this is ridiculous. I want to make it true, but it's *not* currently true, and it's silly to waste network bandwidth when cache space is so cheap anyways.

TASK(paper): look at more modern competitors than WAFL/AFS/etc. Look into gefs and ZFS, in particular.
TASK(neoventi): investigate the possibility of storing large blocks, and e.g. streaming media files directly out of venti. (Also useful for e.g. torrents?)
TASK(neoventi, research): multi-level blocks? e.g. reserve space for a torrent as a giant block, and _also_ for its individual blocks, pointing to the same storage on disk?
TASK(neofossil, research): look into gear chunking more thoroughly. Wanna understand the math on this one :)
TASK(neofossil, research): need to look into improving security, as well. 
Actually implementing authentication is a good starting point. Need to shut off as the train is arriving now, but I'm on page 4. Finished reading the paper; got a few interesting things from it I missed previously, but it's mostly unhelpful.

DESIGN(neoventi): need to figure out what to replace the type with / how better to serve the "look up root scores" use case in a secure manner. Per-user lists in read-write storage, with a 'public' bit on each entry, might be acceptable?

Going to implement the checkers _before_ the formatter. That'll ensure I have a total awareness of the disk format when I go to write the formatter. Checkarenas first. venti/checkarenas operates on a _singular arena partition_ as follows:
• initarenapart
• for arena ∈ arenapart:
	• if verbose, print arena info
	• if scanning the entire arena:
		• For clump ∈ arena:
			• Verify the clump's magic
			• If the clump's magic is wrong, and we're set to fix, assume that there was a failed write, and mark the clump as free?
				• TODO(venti/check, correctness): what effect does this have if there's a corrupt clump in the middle of correct data in the active arena? Would we end up appending to it?
				• TODO(venti/check, correctness): what happens if we run this on a sealed arena that has been corrupted in the middle?
			• Load the clump's data from the log
			• Verify the clump's score
			• If the score is wrong _and the encoding is none_, assume that the block was only partially written, and ignore it.
				• TODO(venti/check, correctness): check if this assumption holds - and, also, what to expect if we crash in various places.
			• Detect invalid types
				• Mark them as corrupt, if set to fix
	• Verify that the header is correct
	• If the memstats and diskstats are out of sync, replace the diskstats.
		• TODO: check why this is needed. Seems likely that this is to recover from when we think we wrote data but it didn't get synced properly? 
That shouldn't be possible, though, since we should not update stats before writing out the blocks they refer to, presumably? venti is crazy enough that it wouldn't surprise me if it violates that requirement, though. So what I actually want for my replacement, then:
• For arena ∈ arenapart:
	• For clump ∈ arena:
		• Read clump from log, decompressing if necessary
		• Read clump info from arena directory trailer
		• Cross-reference clump
		• Validate score
		• If compressed:
			• Recompress, double check result
	• Verify the header
	• Verify the trailer
	• If sealed, verify the arena's checksum

TASK(neoventi, optimization, unimportant): optimize arena partition loading? Takes like 600ms right now, which is hardly terrible, but it can probably be better.

neoventi/checkarenas is reading the wrong address. While investigating venti/checkarenas behavior, I noticed it was reading the same address repeatedly (and hitting the disk cache on most of them), and assumed it was doing something wrong.

	clumpmagic clump 0, aa 0
	getting dblock from c4000
	preading c4000
	loadclump
	getting dblock from c4000
	found c4000 in cache
	readclumpinfo
	preading 200be000
	clumpmagic clump 1, aa 4e
	getting dblock from c4000

It is, but not as much so as I thought. The double reads are avoidable, but it's actually caching entire 8KiB sections, so all the smaller blocks that fit in get prefetched at once. Actually, it's doing something else _worse_ than I thought. 
	clumpmagic clump 15, aa 6ca
	loadclump
	getting dblock from c4000
	found c4000 in cache
	getting dblock from c6000
	preading c6000
	clumpmagic clump 16, aa 26f0
	loadclump
	getting dblock from c6000
	found c6000 in cache
	getting dblock from c8000
	preading c8000
	clumpmagic clump 17, aa 4716
	loadclump
	getting dblock from c8000
	found c8000 in cache
	getting dblock from c8000
	found c8000 in cache
	getting dblock from ca000
	preading ca000
	readclumpinfo
	found 200be000 in cache
	clumpmagic clump 18, aa 673c
	loadclump
	getting dblock from ca000
	found ca000 in cache
	getting dblock from ca000
	found ca000 in cache
	getting dblock from cc000
	preading cc000
	readclumpinfo
	found 200be000 in cache
	clumpmagic clump 19, aa 8762
	loadclump
	getting dblock from cc000
	found cc000 in cache
	getting dblock from cc000
	found cc000 in cache
	getting dblock from ce000
	preading ce000
	readclumpinfo
	found 200be000 in cache
	clumpmagic clump 20, aa a788
	loadclump
	getting dblock from ce000
	found ce000 in cache
	getting dblock from ce000
	found ce000 in cache
	getting dblock from d0000
	preading d0000
	readclumpinfo
	found 200be000 in cache

It is not maintaining alignment _at all_. This is terrible for performance, I think? A single block read should only need to read from one sector, not two? Should probably test this. Regardless, this is unimportant for the read path, and should only impact how I design the write path.

The important question, though, is why we're failing to pread. ...the fd is bad. Jeez. The address might actually be right?? What the heck... ...oh, for fuck's sake:

	for(int i = numarenas; i >= 0; i -= 1)

EXCLUSIVE BOUND. NOT INCLUSIVE BOUND. Whoops.

...My approach here has been completely wrong. I should not be blindly reimplementing venti/checkarenas on top of neoventi's code base at all. I should be understanding the disk format, and then writing a from-scratch corruption checker. 
For now, I should probably take another glance at the venti paper, and at my actual venti, and get a concrete spec for what the current disk format is. ...the paper isn't nearly complete enough, though. To /sys/src, of course.

The config file - which may be an actual file, if not inside of venti, so let's assume that for simplicity - specifies the roots:

	arenas /dev/kaladin/arenas
	isect /dev/kaladin/isect

arenas entries are loaded as ArenaParts in venti, which is done via initarenapart on a Part, which is just an abstraction over an opened file, effectively (there's a minor caveat that partitions in venti can actually be _parts of a file_, and not an entire file - i.e. a single disk can be partitioned _by venti_, and each part tracks the offset and size into the underlying fd). Similarly, the index section is loaded as an ISect by initisect.

Each partition should have 256KiB of unused space at the beginning. Let's verify that in practice... The space is not wiped by formatting, and is not truly _blank_. The config file exists at the very end of this space, in fact! After that, arena partitions have a 16 byte header, consisting of four four-byte big-endian fields:

	u32 magic
	u32 version
	u32 blocksize
	u32 arenabase

The magic is 0xa9e4a5e7U.

	% dd -if /dev/kaladin/arenas -bs 1024 -skip 256 -count 1 | xd
	1+0 records in
	1+0 records out
	0000000 a9e4a5e7 00000003 00002000 000c2000
	0000010 00000000 00000000 00000000 00000000

As can be seen from the venti I'm typing this on, the current arenaparts version is 3. My block size is set to 8KiB, and the first arenabase is 0xc2000. I'm not certain what the arena base is; likely the offset of the first arena, but relative to the partition? The header? There's also a table which lists all arenas in the partition, which appears to be aligned to the first block after the header. 
	ap->tabbase = (PartBlank + HeadSize + ap->blocksize - 1) & ~(ap->blocksize - 1);

The table is required to be _before_ the arenabase. As tabbase is relative to the partition, the arenabase appears to be as well. 0xc2000 puts the arena base 776KiB into the file; let's test that. The arena header magic is 0xd15c4eadU.

	% dd -if /dev/kaladin/arenas -bs 1024 -skip 776 -count 1 | xd
	1+0 records in
	1+0 records out
	0000000 d15c4ead 00000005 6172656e 6173302e
	...

Okay, so there's definitely an arena there. What's the use of arenabase, though, if all arenas and their offsets are listed in the table? For recovery in case the table is corrupt? Ahh, no, it's really meant to mark the _end of the table_. The arena partition is just a set of arenas with a table to allow quick lookups by name, apparently?

The arena table is... textual, of course. Cool. It should be easy to find it; 256KiB into the file, plus 512 bytes, then round up to the next 8KiB block; that's just 264KiB in, and - since the arena base is 776KiB in - the table should be exactly 512KiB? Suspiciously exact, but let's check.

	% dd -if /dev/kaladin/arenas -bs 1024 -skip 264 -count 512 >/tmp/table

While /tmp/table is 512KiB, most of it is NULL. The partitioner appears to round up to 512KiB no matter what. The table itself is formatted simply enough: a count of the number of entries, followed by that number of entries, each consisting of a textual name, a tab character, a start offset, a tab, a stop offset, and a newline delimiter.

	% cat /tmp/table
	1416
	arenas0.0	794624	537665536

The first entry is listed as starting at 794624, which happens to be exactly 776KiB into the file, exactly equal to the arenabase - this confirms that addresses in the map are, in fact, direct offsets into the partition. I'm going to define an ArenaParts version=4 that uses a binary table instead, but that's a later project. TODO.

Arenas contain both a header and a trailer. Both occupy one block. 
NOTE: the usage of blocks as a unit for fundamental structures imposes a limit on the block size. The header and trailer _both_ contain some of the information, for redundancy. The trailer structure is as follows:

	u32 magic = 0xf2a14eadU
	u32 version = (4|5)
	[64]u8 arena name
	Disk stats:
		u32 clumps
		u32 cclumps
		u32 ctime
		u32 wtime
		?version == 5:
			u32 clumpmagic
		u64 used
		u64 uncompressed size
		bool sealed
	bool has_memstats
	?has_memstats:
		Memory stats:
			u32 clumps
			u32 cclumps
			u64 used
			u64 uncsize
			bool sealed

Arenas are indexed live. The disk stats represent what is committed fully to disk: i.e., what the index sees. The memory stats reveal clumps that are in the arena, but not in the index. The arena header structure is as follows:

	u32 magic = 0xd15c4eadU
	u32 version
	[64]u8 arena name
	u32 blocksize
	u64 size
	?version == 5:
		u32 clumpmagic

The first arena begins at 794624, which is 97 blocks in. The table ends at the end of block 95.

	% dd -if /dev/kaladin/arenas -bs 8192 -skip 96 -count 1 > /tmp/foo

Block 96 is totally empty. This may be intentional, to prevent an overwrite? Block 97, on the other hand, contains the arena header, as expected - followed by a lot of empty space.

	0000000 d15c4ead 00000005 6172656e 6173302e
	0000010 30000000 00000000 00000000 00000000
	0000020 00000000 00000000 00000000 00000000
	0000030 00000000 00000000 00000000 00000000
	0000040 00000000 00000000 00002000 00000000
	0000050 20000000 05228fa3 00000000 00000000

All the space after that is empty, and reading it is pointless.

	dat.h:114: ArenaSize4 = 2 * U64Size + 6 * U32Size + ANameSize + U8Size,
	dat.h:115: ArenaSize5 = ArenaSize4 + U32Size,
	dat.h:116: ArenaSize5a = ArenaSize5 + 2 * U8Size + 2 * U32Size + 2 * U64Size,

Could just be reading one 512-byte sector instead of 16 of them >_> Operating at the block level could make some sense, but the readpart call specifies the block size explicitly. 
Specifying just the arena header size would likely be better >_> Anywho, it's pretty clear that this arena is version 5, named `arenas0.0', has block size 0x2000 (8KiB), size 1/8th of 4GiB => 512MiB, and clump magic 05228fa3. After this block should just be clumps?

	% dd -if /dev/kaladin/arenas -bs 8192 -skip 98 -count 1 > /tmp/foo
	0000000 05228fa3 02002800 507b589d 11008702

The next block certainly begins with the clump magic!

	% xd /tmp/foo | grep 05228fa3
	0000000 05228fa3 02002800 507b589d 11008702
	00000b0 841026d0 05228fa3 02004100 50b52af7
	0000180 05228fa3 02006100 f0561e10 bb976037
	0000400 05228fa3 01004601 2cf747e8 0c4f54aa
	0000460 abf77a54 c018482b db007048 05228fa3
	0000500 83e22783 e02fc928 09fce8a0 05228fa3

But it also contains that signature multiple times, and I'm pretty sure that those are all legitimately different clumps. A single block can contain multiple clumps. Can a clump cross blocks, though?

	% dd -if /dev/kaladin/arenas -bs 8192 -skip 99 -count 1 > /tmp/foo
	% xd /tmp/foo
	0000000 9bcaea2e c32578e8 de947f10 d9dcc43c

Yep. Lovely. TODO: we should _try_ to pack clumps efficiently, and buffer them to do so, if needed. Having a read-write buffer area in venti would be useful for other purposes as well (e.g. score management). As far as reading goes, though, I see why venti requires a block cache, and operates at the block level. Going to add one to neoventi now.

I want a unified cache for neoventi - for the index, and the arenas. For disk blocks, I want a map of (arena, block index) to (data block). For the index, I want a map of (score) to (index entry). What does venti use for its caches? Venti just uses the disk address as the cache key, and has the partition pointer in the cache entry for confirmation - so, if two partitions happen to have a block at the same offset, both will go in the same bucket, and one will simply be skipped during lookups. 
The block size means that we've got at least a few bits we can use for the fd in the cache key, instead of needing to put the fd in the cache entry.

Need two things for a good cache: a good hash table to back it up, and a good eviction algorithm. The fifty tabs of research I've got open suggest that S3-FIFO is probably the best eviction algorithm, but I need to figure out the hash table first. I'll probably want to take some inspiration from CLHT (the cache-line hash table) for optimal throughput, but I need to do more research and design. One option for modification is to store keys and values in separate cache lines; this worsens best-case / common time from one cache line to two, but in exchange it should significantly increase the proportion of accesses that hit the best-case time, and reduce space overhead.

It's probably reasonable to just implement a CLHT-like design, write some benchmarking code, and then worry about improving it later. A CLHT-alike should be more than good enough, and far better than the shit that original venti is doing anyways.

So, short-term: we want 64 bytes per bucket. The key must contain the block index as well as the partition that the block is from. If we require a minimum block size of 256 bytes, which seems reasonable, we can use 8 bytes for the key: 7 for the block index, one for the partition index. The value for the disk cache should be a block; it's reasonable to use a pointer for now. That means 16 bytes per key-value pair. We need room for a lock, though. Our libc uses an int for that, so we only strictly need 4 bytes. Three 16-byte entries plus the lock still leaves inadequate room for a full fourth entry, short of changing the key or value format. I could use some of that room for chaining, as CLHT does, but I think linear probing should be better. The other option is to try to reduce the key or value size. If we require one byte per key for the fd, can we reasonably reduce the block index to less than seven bytes?
The "obvious" answer of three bytes would give a mere 16M block indices - 128GiB at 8KiB blocks - nowhere near sufficient.

Can the value be reduced? Well, the cache size is static. If we had, say, 32GiB of block cache at 8KiB per entry, that'd be 32*1024^3 / 8*1024 = 4*1024*1024 = 4M entries, which requires 22 bits for indices - round up to three bytes! Using three full bytes would be fine up to ~128GiB of disk cache, and I'm more than happy setting that as a limit and optimizing for realistic hardware. ...even though my PC could realistically have more than 128GiB if I paid for more RAM >_> Using four bytes gives nicer math, so what if we did that? 8 bytes per key + 4 for value index = 12 bytes per entry; 5 entries + an int for locking gives us our nice bucket size of 64 bytes :D

But! That would make this cache specific to the disk blocks! Could it be reused more generally, or would it need to be a u64→u32 map? Well, actually, would a u64→u32 map be sufficient for the other caches? For the index cache, in particular? It should be fine for clumps; if we use the on-disk address for clumps, we can just do the same trick for that map as for the disk blocks. For the index, though, keys are 20 bytes. A dedicated cache with key-buckets and value-buckets might work better there. Three keys plus a lock for the key buckets, 15 values plus a lock for value buckets? Could work decently well.

Having the disk cache should be enough to make a huge difference regardless, since it'd mean not needing any I/O at all for redundant index accesses, even though it'd mean redundant memory processing - but with a really good disk cache, that might be enough to catch up to venti on some of our tests anyways >_>

So! Short order of business is a u64→u32 map. Buckets contain a Lock, and five keys and five values. Keys are (block_index<<8)|(fd&0xff), and we assert that no fds are greater than 0xff; values are indices into the raw disk cache buffer, and refer to [blocksize]-sized entries, not bytes.
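A tiny sketch of that key packing (the helper name is hypothetical, not from neoventi):

```c
#include <assert.h>
#include <stdint.h>

/* hypothetical helper: pack (block_index<<8)|fd into the u64 cache key */
static uint64_t
cachekey(uint64_t blockindex, int fd)
{
	assert(fd >= 0 && fd <= 0xff); /* we assert no fd exceeds one byte */
	return blockindex << 8 | (uint64_t)(fd & 0xff);
}
```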
typedef struct {
	Lock;
	// index into raw cache
	u32int value[5];
	// block_index<<8 | fd; assert(fd < 0xff);
	u64int key[5];
} bucket_t;

The cache is managed as one giant array of buckets; if an entry is missing in a given bucket, we scan forwards to the next bucket. If it's missing from that bucket, we repeat until we find either an empty slot or the correct entry.

We'll start with a hardcoded table size of 1GiB, and hardcoded block size of 8KiB. This gives us 1024^3 / 8*1024 = 128K entries. With five per bucket, that means ~26K buckets, or ~1.6MiB of overhead.

...sizeof(Lock) in libc is actually eight. We can just copy lock.c and #pragma pack it, I guess. _tas won't care, probably? Unless it wants alignment? Which would make sense, and the lock should probably be aligned for maximum performance anyways. Could reduce the entry count to four, I guess? Or use three bytes per value? Or try to reduce the key size...

Given 32TiB of storage, and 8K block size, the maximum block index is 32*1024^4 / 8*1024 = 4*1024^3 = 4G, which needs four bytes. We... can probably reduce the key size to a u32, too? That would make us a u32→u32 map, and need 8 bytes per entry. It'd mean seven entries per bucket, too. That should work? Ah! Except we need a byte for the fd, still. We could do nine bytes per entry, and still get six entries total?

typedef struct {
	Lock;
	// index into raw cache
	u32int value[6];
	u32int index[6];
	u8int fd[6];
} bucket_t;

It'd mean two wasted bytes per bucket, though. Three bytes for the value would give a limit of 128G of cache, but that's probably fine. Three bytes value + one byte fd + four bytes index = 8 bytes per entry again. Lots of finagling, and lower limits, but more than adequate.

typedef struct {
	Lock;
	u32int index[7];
	// 24 bits of value (index into raw cache) plus 8 bits of fd
	u32int value[7];
} bucket_t;

When caching disk blocks, we don't want to rely on the score, do we?
Probably saner to use the block index as the key there for bucket selection, too; a block will contain multiple scores (clumps / index entries) anyways. Fibonacci hashing on the (index|fd) ought to be sufficient for spreading entries among buckets. Can probably clean up that bucket format a bit, though, but lack of u24int makes it harder :/

We also need a free list to track available slots in the main data bucket. Instead of a linked list, we can make this a large statically-sized stack; the map size can never change. Given 128*1024 entries, we need 128*1024 slots on the stack, maximum, plus a current length, and each slot needs merely to hold a slot index, so a u32 is fine. This will use 512KiB; it could be reduced if we reduced the entry count to 64*1024, but that's not a limitation I'm eager to enforce.

To insert, we need to do a few things:

- Pick a bucket
- Lock the bucket
- Check if it already contains the key
	- If so, update the value, and evict the old cache entry
	- Otherwise, grab an empty slot, and set the key
- Lock the free list
- Grab an empty cache entry from the free list
- Remove the entry from the free list
- Unlock the free list
- Copy the data into that entry
- Set the value of the slot to that cache entry

Buckets are picked via Fibonacci hash of the (index<<8|fd).

In order to be able to test this, I need to adapt the disk reading functions to operate in blocks - which they should already be doing, but were not, since I misunderstood venti's operation due to rushing through things. Currently, to read blocks, we allocate space for blocks, read the data into them, then grab the data we want and store it in the destination buffer. Ideally, we'd have the cache lookup in there, such that we either grab the data from cache or reserve a cache entry, read directly into the cache entries, and then extract from the cache entry into the dbuf. For now, going to just extract the block reading, and have it insert into cache... That was easy enough.
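The free-list stack described above might look something like this - a hedged sketch in standard C; the names and the omitted locking are illustrative, not neoventi's actual code:

```c
#include <assert.h>
#include <stdint.h>

enum { NCACHEENTRIES = 128*1024 }; /* 512KiB of u32 slots, as computed above */

static uint32_t freestack[NCACHEENTRIES];
static uint32_t nfree;

static void
freelistinit(void)
{
	/* all slots start free; push in reverse so slot 0 is popped first */
	for(uint32_t i = 0; i < NCACHEENTRIES; i++)
		freestack[i] = NCACHEENTRIES - 1 - i;
	nfree = NCACHEENTRIES;
}

/* pop a free slot; returns 0 when empty, meaning the caller must evict */
static int
grabfree(uint32_t *slot)
{
	if(nfree == 0)
		return 0;
	*slot = freestack[--nfree];
	return 1;
}

static void
putfree(uint32_t slot)
{
	assert(nfree < NCACHEENTRIES);
	freestack[nfree++] = slot;
}
```

In the real thing both operations happen under the free-list lock, per the insertion steps above.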
Going to use the zero index to indicate an absent entry; the first block in a given partition never has important data anyways :)

Quickly changing the API to match intended usage, as well. cachemapinsert should take the fd, the block index, and a cache entry. We should then have a cachegrab(fd, index) that yields a buffer and an indicator of whether it is free or needs to be read into. Usage would look like:

char *buf;
if(cachegrab(&buf, fd, index)){
	// data is in buf
} else {
	// read data into buf, and call cachemapinsert(fd, index, buf)
}

Ah. Right. Cache insertion needs not the buffer but the _index_. Could do math to get it back, but that seems silly. So... cachegrab should yield a cache index? Also, it'd be nice to be able to use the memory directly within the cache without needing to copy to a destination buffer. Since some chunks can cross blocks, that'd require a way to use multiple sequential blocks in the cache at once, and to ensure they will not be evicted before we're done with them. The latter is important anyways, and a refcount per entry is likely the way to go. The former requirement is a lot harder. And, to e.g. handle a read request, we'll need to copy from the cache into the destination buffer anyways. So, easier to design the interface such that you can grab a block, read it straight into the cache, then do a _single_ copy into the destination buffer, and repeat for other needed blocks.

Oh! And cachegrab can also set up the cache entry, so that block reading could look something like:

u32int cacheindex;
if(cachelookup(&cacheindex, fd, blockindex))
	return cacheindex;
pread(fd, cachebuf(cacheindex), ...);
return cacheindex;

And that's it. Cachelookup can, on a miss, grab a spot from the free list / trigger an eviction if needed. The caller can then cacheunref(cacheindex) when done with the entry.
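That lookup contract can be mocked up in miniature - a toy stand-in in standard C that ignores the free list, eviction, and refcounts, with hypothetical names:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum { BLOCKSIZE = 8192, NSLOTS = 16 };

static uint8_t blocks[NSLOTS * BLOCKSIZE];
static uint32_t slotkey[NSLOTS];
static int used;

/* Toy version of the lookup contract: on a hit, point *buf at the cached
 * block and return 1; on a miss, reserve a slot, point *buf at its buffer
 * for the caller to read into, and return 0. */
static int
cachelookup(uint8_t **buf, uint32_t key)
{
	for(int i = 0; i < used; i++)
		if(slotkey[i] == key){
			*buf = &blocks[i * BLOCKSIZE];
			return 1;
		}
	assert(used < NSLOTS); /* the real code would evict instead */
	slotkey[used] = key;
	*buf = &blocks[used * BLOCKSIZE];
	used++;
	return 0;
}
```

A caller reads into the returned buffer exactly once; a second lookup of the same key gets the same memory back with no extra copy.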
The one downside of reading straight into cache is that under extreme cache pressure, reads will become blocked by the rate of cache evictions - that is, if the cache is so small that the set of active reads cannot fit into it, there will be a major limit to throughput. At sizes just above that, the cache's utility will be negligible, as most entries will be evicted quickly. The upside is that it reduces memory requirements, and it ought to be trivial to ensure that never happens. 1GiB is enough for >100K entries, which is far more than sufficient for purpose. Even just a few megs ought to be plenty to avoid total thrashing.

We also need write locking on reads into the cache. Given two concurrent reads of the same data, one will read from disk into cache, both should yield the same cache index, and the memory-reader must block on the disk-reader to finish. A spin lock, held when yielding only to writers, should be sufficient, and require just eight bytes per entry - 1MiB, given a 1GiB cache. On the other hand, this is bumping the complexity / cost of avoiding copying, so it might be better to just... not? Holding the bucket lock while inserting would be... maybe fine, actually?

u32int cacheindex;
if(cachelookup(&cacheindex, fd, blockindex))
	return cachebuf(cacheindex);
pread(fd, cachebuf(cacheindex), ...);
// unlocks the bucket
cachewritten(fd, blockindex);
return cachebuf(cacheindex);

...except, we no longer need the cache index, since we're no longer responsible for inserting it!

char *buf;
if(!cachelookup(&buf, fd, index)){
	pread(fd, buf, ...);
	cachewritten(fd, index);
}
return buf;

We don't even need any sync on the other reader, since one of the parallel cachelookup calls will block on the other thread reading the data :) We still need a way to track buffer references, though. We can stuff a secret eight bytes before each buffer, and then just atomically increment it? For now, lacking atomics, we can toss a lock and adjust it under the lock.
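The hidden-refcount idea could look like this - a sketch under the assumption of a per-entry header, with the atomics/locking omitted since we're effectively single-threaded here; the layout and names are mine:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* each cache entry carries a hidden counter just before its data */
typedef struct {
	uint64_t ref;          /* the "secret eight bytes" */
	uint8_t data[8192];
} CacheEntry;

static CacheEntry entries[4];

static uint8_t*
cacheref(int slot)
{
	entries[slot].ref++;   /* adjusted under the bucket lock, lacking atomics */
	return entries[slot].data;
}

/* recover the entry from the buffer pointer; a return of 0 means reclaimable */
static uint64_t
cacheunref(uint8_t *buf)
{
	CacheEntry *e = (CacheEntry*)(buf - offsetof(CacheEntry, data));
	return --e->ref;
}
```

The point of the offsetof trick is that callers only ever hold the data pointer, yet unref needs no separate index argument.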
Entries can only be reclaimed when they have no active readers. Ooh, one option for tracking readers without too much locking is to have a list of items being read. Could be designed to fit pretty compactly, and should never be huge, since we don't expect to have more than a dozen active readers at once. Might be easier to manage.

...short term, can just lock the bucket during a read, too? That doesn't help for evictions, though? Ah, sure it does; pick a random bucket, try to lock it, advance until we find a bucket we can lock, evict its first entry >_> Suboptimal, but might work. And having cachedone require the fd and block index instead of the buffer is not great.

With the cache disabled as follows, I measured time to walk the fossil root I've been using for testing.

char*
vtreadarenablock(VtArena *arena, u32int blockindex)
{
	char *buf = malloc(8192);
//	if(!cachelookup(&buf, arena->fd, blockindex)){
		if(pread(arena->fd, buf, arena->blocksize, arena->base+(blockindex*arena->blocksize)) != arena->blocksize)
			return nil;
//	}
	return buf;
}

It took 29.96s. With the cache enabled, I get a compression error >_>

Lots of pain and a month later, figured out the issue (the pain wasn't related to neoventi, mostly - just depression and grief. YAYYY!) Short story: the assumptions were wrong. The fd was useless as part of the key, because all the arenas are in the same file, and the block index is _arena-local_. Drop the fd, use a bespoke cache-specific arena index with the block index, and voila: the key goes from five bytes to four bytes and everything works! (With the cachedone() function modified to linearly probe.) Bonus: since the value is three bytes, this takes entry size from eight to seven bytes; the 56 bytes are now enough for eight entries instead of seven :D (And, if the lock is reduced from eight bytes to one, we can trivially add a ninth entry per cache line!)

This reduces the _cold_ time for walk /n/vac from ~30s to ~23s, and warm time from ~30s to ~19s.
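The revised packing - a four-byte key of (cache-local arena index, arena-local block index), and a three-byte value - can be sketched as follows (standard C; helper names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* 16 bits of cache-specific arena index, 16 bits of arena-local block index */
static uint32_t
cachekey(uint16_t arenaindex, uint16_t blockindex)
{
	return (uint32_t)arenaindex << 16 | blockindex;
}

/* values are 24-bit cache indices, stored as three bytes */
static void
putvalue(uint8_t *v, uint32_t cacheindex)
{
	v[0] = cacheindex & 0xFF;
	v[1] = (cacheindex >> 8) & 0xFF;
	v[2] = (cacheindex >> 16) & 0xFF;
}

static uint32_t
getvalue(const uint8_t *v)
{
	return v[0] | (uint32_t)v[1] << 8 | (uint32_t)v[2] << 16;
}
```

Seven bytes per entry total, which is what lets eight entries share a 56-byte bucket body.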
Profiling indicates that >70% of time is still spent in read() - including on the warm run, when all data is in cache! - which heavily implies that the index access is the current bottleneck. So, as a first step, we need to stuff the index entries into the disk cache as well. First idea is to give index sections a cache index as with arenas - there's definitely room in the address space for that! - and then use the index bucket as the block index (which it is!). Problem with that: unlike with arenas, where 16 bits is enough for the block index, a typical venti has a single large index section, not many small ones, and so we cannot fit the bucket into 16 bits. A few approaches are obvious:

- Split the index section on disk. Adjust the formatter to default to generating many smaller sections.
	- This requires regenerating the index, and breaks compatibility. Hard pass.
- _Fake_ splitting the index section. Generate many index sections in memory that all correspond to the same real index section, each with a subset of buckets.
	- This results in roughly one index section per 10GiB of storage. For 32TiB, as we specced out for the maximum storage in arenas that can fit in the cache as is, we'd need ~3200 index sections. This won't pollute the cache key space at all. (For comparison, for arenas, we definitionally need ~1 entry per 512MiB!)
	- This can be approximated by faking the key space. Each index section is given a _base_ cache index, and its range is calculated (so that future sections can be added with a higher base). For my system, I'd need ~75 fake index sections.
	- This can be accomplished by splitting the bucket into the lower sixteen bits (simulated block index), and then at _most_ 12 bits are needed for the index base offset - _and that's if 32TiB are represented in the singular index section!_
	- This seems like a simple enough solution.

Did that, and now it's 44s cold and 27s hot. No access to disk is needed on subsequent walks, but it's still slower.
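The split described in the last bullets - lower sixteen bits as a simulated block index, upper bits folded into a per-section base - might look like this (hedged sketch, names mine):

```c
#include <assert.h>
#include <stdint.h>

/* Map a global index-bucket number into the same key space the arenas use:
 * the section's base cache index absorbs the bucket's upper bits, and the
 * lower 16 bits act as a simulated block index. */
static uint32_t
isectcachekey(uint32_t base, uint32_t bucket)
{
	uint32_t fakesection = base + (bucket >> 16);
	return fakesection << 16 | (bucket & 0xFFFF);
}
```

Each fake section covers 64K buckets, so a real section spanning N buckets reserves N/65536 (rounded up) consecutive cache indices starting at its base.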
Going to do a dozen warm runs and then check the profile... Oh! And that's 44s/27s _with profiling enabled!!!_ That's kinda bullshit. Regardless, down to 41s/24s by reducing how long locks are held. Need to revisit the cache locking, because this is still not great. ...it also implies there _is_ some lock contention, somehow??? Or, perhaps, it's the lack of linear probing after getting a cache entry (prevents some cache misses and computation).

35.5% in read(), with 1.2M calls?? That's odd; there shouldn't be that many, I think? 27.5% in cachelookup, 15% in lock, 7% in unlock, and 6% in cachedone. That's ~55% of cache lookup overhead. 3.2 _billion_ calls to lock and unlock.

Ah, right: those read calls are network related :P They're 1/3rd of execution time, so it might be worth looking into fixing that up (/ pipelining: start reading the next request while handling the current one! TODO!), but for now, the cache problem is much more severe.

With locking outright removed (we're single threaded for now anyways!!), cold is down to 28s _with profiling!_, and 11.5s warm _WITH PROFILING_!!! Down to 27.15s/10.8s by removing some hot allocation. Venti can do it in ~8s warm. Can we beat that without parallelizing more?

... 74.6% of time is spent on read. A bit of that was from disk, but most is network; let's call it ~70% network overhead. 15% is the cachelookup function. 8% is strcmp+arenafromindex. 1% is processing index entries from the bucket; adding an index cache would have minimal effect on this workload, at least. .6% is decompression. Everything else is trivial.

strcmp can probably be avoided. What are we using that for that there's 1.2B calls to it in the profile?? Mapping arenas from name to index, apparently. That's silly. We should just have the arena map entry contain the VtArena*. Then we only need to do the indexing at startup. That brings us down to 24s cold (WITH PROFILING) and 7.74s warm! venti is BEAT!

Without profiling, 24.4s cold, and... 8s warm??? the hell?
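The fix is just to resolve names once at startup and store the pointer in the map entry; a hedged sketch, where the types are stand-ins rather than neoventi's actual definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { char name[64]; } VtArena;

typedef struct {
	uint64_t start, end;  /* address range this arena covers */
	VtArena *arena;       /* resolved once at startup; no strcmp per lookup */
} MapEntry;

/* address lookup touches no strings at all */
static VtArena*
arenafromaddr(MapEntry *map, int nmap, uint64_t addr)
{
	for(int i = 0; i < nmap; i++)
		if(addr >= map[i].start && addr < map[i].end)
			return map[i].arena;
	return NULL;
}
```

The name-to-arena indexing still happens, but only while loading the map, so the billions of per-lookup strcmp calls disappear.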
meh. close enough. It's plausible that the profiling is actually faster here for some reason lol

Interestingly, the profile has changed a bit: 62% read, 31% cachelookup, 3.3% vtreadlookup, 1.6% decompression, and then a long tail of who cares. Going to split cachelookup into pieces so I can see what's actually taking time. Lack of inlining means that might actually make things a tad slower, but the code will be cleaner, so I don't really care.

static bucket_t*
getbucket(u32int key)
{
	// Fibonacci hashing!
	return &buckets[key * 11400714819323198485LL >> BITS];
}

static void
put(bucket_t *bucket, int i, u32int key, u32int cacheindex)
{
	bucket->index[i] = key;
	bucket->values[i*3] = cacheindex&0xFF;
	bucket->values[i*3+1] = (cacheindex>>8)&0xFF;
	bucket->values[i*3+2] = cacheindex>>16;
}

// Looks up a cache entry. If the entry is not present in cache,
// this grabs a free buffer, inserts it into cache, and yields
// its index so that the caller may fill it up.
// Returns whether the data was already in cache.
int
cachelookup(char **buf, u16int arenaindex, u32int blockindex)
{
	u32int cacheindex;
	u32int key = (arenaindex << 16) | blockindex;
	bucket_t *bucket = getbucket(key);
	int existed;
	while(bucket != bucketend){
		for(int i = 0; i < NENTRIES; i += 1){
			if(bucket->index[i] == 0){
				// Key does not exist
				cacheindex = grabfree();
				*buf = &data[cacheindex*8192];
				put(bucket, i, key, cacheindex);
				return 0;
			}
			if(bucket->index[i] == key){
				u32int cacheindex = bucket->values[i*3] | (bucket->values[i*3+1]<<8) | (bucket->values[i*3+2]<<16);
				*buf = &data[cacheindex * 8192];
				return 1;
			}
		}
		bucket++;
	}
	//FIXME not unreachable
	//if the final bucket is full, we must handle it
	sysfatal("unreachable");
	return 0;
}

0.0	0.028	1102923	getbucket
0.0	0.000	41077	put
14.5	70.896	1102923	cachelookup

Iiiiinteresting. Picking the bucket and inserting into it are trivial. Let's split this further...
static void
get(bucket_t *bucket, int i, char **buf)
{
	u32int cacheindex = bucket->values[i*3] | (bucket->values[i*3+1]<<8) | (bucket->values[i*3+2]<<16);
	*buf = &data[cacheindex * 8192];
}

static int
trybucket(bucket_t *bucket, int *i, u32int key)
{
	for(*i = 0; *i < NENTRIES; *i += 1){
		if(bucket->index[*i] == key)
			return 1;
		if(bucket->index[*i] == 0)
			return 0;
	}
	return -1;
}

// Looks up a cache entry. If the entry is not present in cache,
// this grabs a free buffer, inserts it into cache, and yields
// its index so that the caller may fill it up.
// Returns whether the data was already in cache.
int
cachelookup(char **buf, u16int arenaindex, u32int blockindex)
{
	u32int cacheindex;
	int i;
	u32int key = (arenaindex << 16) | blockindex;
	bucket_t *bucket = getbucket(key);
	while(bucket != bucketend){
		switch(trybucket(bucket, &i, key)){
		case -1:
			// miss. linear probe.
			bucket++;
			continue;
		case 0:
			// found an empty slot
			cacheindex = grabfree();
			*buf = &data[cacheindex*8192];
			put(bucket, i, key, cacheindex);
			return 0;
		case 1:
			// already exists
			get(bucket, i, buf);
			return 1;
		}
	}
	//FIXME not unreachable
	//if the final bucket is full, we must handle it
	sysfatal("unreachable");
	return 0;
}

I'll be surprised if this isn't noticeably slower again. Ha, yeah: 32.1s cold, 15.75s warm. Yikes. Knew it would be slower, but 2x slower on the warm path? Should be a good profile, at least.

45.7	224.300	1368400	read
31.2	153.185	3565574884	trybucket
19.4	95.175	1427308	cachelookup
1.6	7.741	684185	vtreadlookup

...oh. This is probably the weakness of linear probing with insufficient entries; there's an average of ~2500 bucket probes needed to find a good slot. Oops.

Also, BITS is set wrong. We're able to use all buckets, but only really _trying_ to use the first 255, because we're shifting over by 56. We have 64K entries, and we were reserving entries/4 buckets (since we had seven entries per bucket, and /7 would not give a power of two bucket count!), so we should have had that set to 50.
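That arithmetic, spelled out as a standalone sketch (the constant names here are mine, not neoventi's):

```c
#include <assert.h>
#include <stdint.h>

enum {
	NCACHE = 0x10000,       /* 64K cache entries */
	NBUCKETS = NCACHE / 4,  /* 16K buckets: 7 entries each, rounded to a power of two */
	BITS = 64 - 14,         /* log2(16K) = 14, so shift the 64-bit product right by 50 */
};

static uint64_t
bucketindex(uint32_t key)
{
	/* Fibonacci hashing: multiply by 2^64/phi, keep the top log2(NBUCKETS) bits */
	return (key * 11400714819323198485ULL) >> BITS;
}
```

With BITS at 56, only the top 8 bits survive, so every key lands in the first 256 buckets and linear probing has to crawl from there.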
Changing that gives a cold time of 20.7s even with the patches! Warm time is 4.39s.

If I set entries to 0x40000, things break. That should be ~2GiB of memory; I'd think it was malloc related, but I'm not using malloc, and the issue occurs later. Invalid address in a pread syscall... The buffer 0x7fffdfff is apparently invalid, for some reason? data is at 10ded90, ends at 410ded90. Regardless of whether it's valid, it shouldn't actually be returned anywhere. Weird... ...max index into data should be 0x40000 * 0x2000 = 2GiB. Are array indices of type int? If so, it might be wrapping >_> Ahhhhh, sbrk is failing to allocate 2GiB? Right: the _sbrk_ parameter was an int >_> (usize)0x2000 and now it works.

Also, with the cache fixed so that linear probing is less bad:

90.0	187.229	1368400	read
4.7	9.844	684185	vtreadlookup
1.8	3.760	664759	unwhack
1.6	3.338	743123	vtreadarenablock
0.3	0.717	1	listen
0.3	0.673	1416	loadarena
0.2	0.377	766800	memcpy
0.1	0.280	684185	aindexfromaddr
0.1	0.266	1427308	cachelookup
0.1	0.148	684185	bucketlookup
0.1	0.143	684185	vtread
0.1	0.141	684187	vtrecv
0.1	0.133	1572426	memcmp
0.1	0.127	684185	vtreadarena

Next up is to reduce the number of read()s. Can do this by reading multiple packets per call (read into a 64K buffer, track its length, grab packets from it, rehydrate as needed). Might be able to reduce that 187s to closer to 5s. There were more than 20 runs in that profile, so the result would likely be a total runtime of ~1s. Nooope: reading from the TCP file will only give us a packet at a time. This will at best reduce total time by ~45%, because it combines the 2-byte read with the read of the rest of the packet. Still, not bad.

Okay, lol. From 23.68s cold and 4.45s warm to 20.69s cold and 4.35s warm. Time to grab another profile :/ ...the profile is treating time read is _blocked_ as time spent reading. Oi. Going to run neoventi and then run this as quickly as possible: mk vac; i=();while(!
~ $#i 50){time walk /n/vac > /dev/null; i=($i ())} ; slay vacfs | rc

Should take around 4-5 minutes, and the profile might be a tad more accurate.

92.7	1415.727	11089337	read

We're blocked on the client :P Most of the runtime is waiting for vacfs to get back to us :D

What the hell is wrong with the config?? It should be the last Maxconfig bytes of the first PartBlank!

readifile() → readpart(PartBlank-Maxconfig, Maxconfig) → pread(fd, buf, Maxconfig, PartBlank-Maxconfig)

dat.h:40: 	PartBlank	= 256*1024,	/* untouched section at beginning of partition */
	Maxconfig	= 8 * 1024,

% venti/conf /dev/jasnah/arenas
index main
arenas /dev/jasnah/arenas
isect /dev/jasnah/isect
mem 256m
bcmem 256m
icmem 512m
httpaddr tcp!*!13012
addr tcp!*!13011

% dd -if /dev/jasnah/arenas -bs 8192 -count 1 -skip 248
[garbage]
% dd -if /dev/jasnah/arenas -bs 8192 -count 1 -iseek 248
[garbage]
% dd -if /dev/jasnah/arenas -bs 1 -count 8192 -skip 253952
[garbage]

I'm preeeetty sure I'm using dd right >_< Do I have the address wrong?? ...oh wait. Oops. 248KiB in, not 248*8K in. Dummy.

% dd -if /dev/jasnah/arenas -bs 1024 -count 8 -skip 248

That works. lol. Okay, config parsing is good. Let's fix multiclient support.

With a foreground proc for handling, I - oh. This is why testing is important! neoventi shuts down on disconnect >_<

Ohhhhh, on vthangup we're bouncing() with 2 as the parameter, which triggers exits(). That was easy lmao.

Annnnd with that fixed, it crashes on hangup with one thread, too!

7.neoventi 55793: suicide: sys: trap: misaligned pc pc=0x5
mk: ./7.neoventi -c /dev/jasnah/arenas : exit status=7.neoventi 55793:

... huh.

waiting for packet...Received 0 bytes!
hanging up: unknown control request

"unknown control request" is not a neoventi error; that's a driver error. Preeeetty sure it's just from the connection dropping, though. Probably. Maybe not.

The trampoline appears to be going to the wrong place... the heck? inittrampoline calls setjmp().
The longjmp is correctly returning to that setjmp, but the return in inittrampoline is not returning to handleproc a second time??

	Longjmp restores the environment saved by the last call of
	setjmp.  It then causes execution to continue as if the call
	of setjmp had just returned with value val.  The invoker of
	setjmp must not itself have returned in the interim.  All
	accessible data have values as of the time longjmp was
	called.

The invoker of setjmp must not itself have returned in the interim.

Ohhhh dumb. Duuumb. Manually inlining inittrampoline() fixed that. Multithread?? Yep!!! :D

Technically works. Ish. So long as only one actually does anything at a time >_< Cache not locking right, maybe? I don't think I ever restored bucket locking... Just going to lock around puts, since there's no eviction yet... annnnd it hangs. lol.

> locking bucket...locked.
> bucket put, unlocking...done

The only line that locks a bucket has that print before it, so... this isn't hanging on lock?? Hanging in pread() after grabbing the block from the cache???? What the hell! Yeah, print() debugging confirms:

locking bucket...locked.
bucket put, unlocking...done
reading data into cache...

The hell?! Okay, reverted, and that's still happening, so... um. What?? Fully reverted everything and it's still going on. Confusion.

Rebooting is a _really_ stupid potential "solution." This is an issue. I can read from the file using e.g. dd, so this _probably_ isn't e.g. a kernel issue where something is locked because I unplugged the RNDIS device while it was in use :P

Eughhhh, tangentially, I need to make my acme scripts clean up when acme dies >_< Makes looking through the process list much more annoying than it really needs to be.

Good news is that it's still broken after reboot. Did something get accidentally added to a commit and break it? :/

8e31146d45d0612b82a5438012e6ed03be3995a9 works. Cool, let's bisect this. cdc922649bc14a2e25abd4593b244f36fffff1d0 does not. Okay, did ca929 break it?
Nope, that one works. Um. cdc922649bc14a2e25abd4593b244f36fffff1d0 broke it??

% git/export cdc922649bc14a2e25abd4593b244f36fffff1d0
From: Noam Preil <noam@pixelhero.dev>
Date: Thu, 04 Jul 2024 21:07:18 +0000
Subject: [PATCH] server: handle each client in a separate proc
---
diff ca929c334fedc59fc1240b2c64b887082ef331f2 cdc922649bc14a2e25abd4593b244f36fffff1d0
--- a/notebook
+++ b/notebook
@@ -1155,3 +1155,46 @@
 % dd -if /dev/jasnah/arenas -bs 1024 -count 8 -skip 248
 
 that works. lol.
+
+Okay, config parsing is good. Let's fix multiclient support.
+
+With a foreground proc for handling, I - oh. This is why testing is important! neoventi shuts down on disconnect >_<
+
+Ohhhhh on vthangup we're bouncing() with 2 as the parameter, which triggers exits(). That was easy lmao.
+
+annnnd with that fixed it crashes on hangup with one thread, too!
+
+7.neoventi 55793: suicide: sys: trap: misaligned pc pc=0x5
+mk: ./7.neoventi -c /dev/jasnah/arenas : exit status=7.neoventi 55793:
+
+... huh.
+
+waiting for packet...Received 0 bytes!
+hanging up: unknown control request
+
+unknown control request is not a neoventi error, that's a driver error. Preeeetty sure it's just from the connection dropping, though. Probably. Maybe not.
+
+The trampoline appears to be going to the wrong place... the heck?
+
+inittrampoline calls setjmp(). The longjmp is correctly returning to that setjmp, but the return in inittrampoline is not returning to handleproc a second time??
+
+	Longjmp restores the environment saved by the last call of
+	setjmp. It then causes execution to continue as if the call
+	of setjmp had just returned with value val. The invoker of
+	setjmp must not itself have returned in the interim. All
+	accessible data have values as of the time longjmp was
+	called.
+The invoker of
+	setjmp must not itself have returned in the interim.
+
+ohhhh dumb. duuumb.
+
+manually inlining initrampoline() fixed that. Multithread??
+
+Yep!!! :D
+
+Technically works. Ish.
So long as only one actually does anything at a time >_<
+
+Cache not locking right, maybe?
+
+Yep!
--- a/server.c
+++ b/server.c
@@ -17,6 +17,7 @@
 	msg = vsmprint(msg, args);
 	werrstr(msg);
 	free(msg);
+	fprint(2, "error: %r\n");
 	va_end(args);
 	if(tbuf != nil){
 		len = snprint(tbuf+6, 0x10000, "neoventi: %r");
@@ -171,38 +172,31 @@
 	}
 }
 
-static int
-inittrampoline(VtConn *conn)
+static void
+handleproc(void *fd)
 {
-	switch(setjmp(conn->bounce)){
+	char buf[MaxPacketSize];
+	VtConn conn;
+	conn.fd = (int)(usize)fd;
+	switch(setjmp(conn.bounce)){
 	case 0:
-		return 1;
+		vthello(conn, buf);
+		vtloop(conn, buf);
+		abort();
 	case 1:
 		fprint(2, "abandoning client: %r\n");
+		break;
 	case 2:
-		exits(nil);
-		return 0;
+		fprint(2, "hanging up: %r\n");
+		break;
 	default:
 		fprint(2, "internal error: unexpected bounce code\n");
-		return 0;
+		break;
 	}
+	close(conn.fd);
 }
 
 static void
-handleproc(void *fd)
-{
-	char buf[MaxPacketSize];
-	VtConn conn;
-	conn.fd = (int)(usize)fd;
-	if(!inittrampoline(&conn)){
-		close(conn.fd);
-		return;
-	}
-	vthello(conn, buf);
-	vtloop(conn, buf);
-}
-
-static void
 handle(int ctl, char *dir)
 {
 	int fd = accept(ctl, dir);
@@ -210,8 +204,7 @@
 		fprint(2, "failed to accept connection\n");
 		return;
 	}
-	handleproc((void*)fd);
-//	proccreate(handleproc, (void*)fd, mainstacksize);
+	proccreate(handleproc, (void*)fd, mainstacksize);
 }
 
 void

but, but that's just what it's supposed to be?!?! Dammit. Isolate this into smaller chunks :(

% git/branch front
% git/revert -c ca929c334fedc59fc1240b2c64b887082ef331f2 server.c
% mk all; mk fg
% mk vac

Works. Okay. Um.

% git/revert server.c
% mk all; mk fg
% mk vac

Hangs. Yep. That confirms it.

-	handleproc((void*)fd);
-//	proccreate(handleproc, (void*)fd, mainstacksize);
+//	handleproc((void*)fd);
+	proccreate(handleproc, (void*)fd, mainstacksize);
 }

That does it??? really?? but it worked fine!! Is it because I'm passing the conn - and thus the jmp_buf - around on the stack?
Ohhh my gosh, it's the stack size >_< I had it bumped to see if that fixed a different issue, then undid the bump when committing >_<

It needs to be bumped, though. Which makes sense; we've got two packet-sized buffers, and those are 64K each, so 128K isn't big enough. Let's bump mainstacksize to, I dunno, 256K should be plenty. Yep, that's good. Cool!