welcome to the 157 test page. the focus of the next client version is the complete reengineering of the search distribution network. wasn't this the focus of the last couple of versions too? yes, yes it was. but as successful as we were in bringing search functionality back from the dead, it's still far from ideal, seeing as all the changes we've made to the search distribution network so far have been incremental improvements to something that was originally only meant to work just well enough, rather than a true redesign.

if you're at all curious about the specifics of what's going to change and how we're going to achieve that, please feel free to read on. if you're just interested in helping us out, use the link below to download the test client. it can safely be run alongside the regular soulseek client and connects you to a secondary, much smaller test server. since much has changed under the soulseek client hood, we will no doubt encounter many bugs and problems, big and small, which means updated versions of the client will be posted here as soon as those are fixed, often on an hourly basis (yeah, we've done this before).

soulseek client version 157 test 8

important note: a small percentage of soulseek users suffer a loss of configuration files whenever they upgrade to a new version. these files include buddy list information, unfinished downloads and all other user-generated information. if you've experienced this before, or are just looking to stay on the safe side, make sure to copy all .cfg files from your soulseek installation folder (typically c:\program files\soulseek-test\) to a safe location before upgrading. if configuration loss occurs, close the client if it's open, copy all .cfg files back into the soulseek installation folder and restart the client.
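the backup step above boils down to a couple of commands. this is just a sketch with placeholder paths; point SRC at your actual install folder (the demo lines below only exist so the snippet runs on its own):

```shell
# demo stand-ins for the real folders; on windows SRC would be
# something like "c:\program files\soulseek-test"
SRC="./soulseek-test"
DEST="./soulseek-cfg-backup"

# (demo only) create a fake install folder with a couple of .cfg files
mkdir -p "$SRC"
touch "$SRC/buddies.cfg" "$SRC/queue.cfg"

# the actual backup: copy every .cfg file somewhere outside the install folder
mkdir -p "$DEST"
cp "$SRC"/*.cfg "$DEST"/
```

restoring after a configuration loss is the same copy in the other direction, with the client closed.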

the technical side of it: the soulseek search distribution network is a simple hierarchy where clients are connected to other clients in child/parent relationships. the construction is done dynamically, with the server constantly trying to push clients off of itself (only for the purpose of sending search requests) and onto other clients. the problem? other than a few simple sanity checks and the rigor with which the server performs said pushing-off, there is very little management of how the overall network forms. enter DNet reports. the idea is very simple: the client keeps the server informed of its place in the network at all times, reporting changes almost immediately after the events that generate them. so whereas the server we have now is pretty much blind to where a client is and what it's doing past the point of push-off, the one we're testing has enough information to graphically chart the layout of the distribution network at all times. this information is used in several ways: first, it lets us detect problems in the network formation protocol where earlier we had to do a lot more guesswork. second, we're already using it to limit the possible parents a client is exposed to before it's found one. we're making sure the network doesn't get too deep, so that search results arrive faster and more reliably. we're using it to divide the network into separate groups with group leaders, each one accounted for and prioritized. and we're using it to profile the actual arrival of each search request globally, so we can dynamically adjust the operation logic. it's all good, really.
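to make the idea concrete, here's a rough sketch of the server-side bookkeeping DNet reports enable. every name, structure and constant here is an illustrative guess, not the actual soulseek protocol:

```python
MAX_DEPTH = 5  # hypothetical cap on distribution hierarchy depth


class DNetChart:
    """tracks who is parented to whom, based on client reports."""

    def __init__(self):
        self.parent_of = {}  # child -> parent

    def report_parent(self, child, parent):
        """a client tells the server it has attached to a parent."""
        self.parent_of[child] = parent

    def report_orphaned(self, child):
        """a client tells the server it has lost its parent."""
        self.parent_of.pop(child, None)

    def depth(self, client):
        """hops from this client up to its parentmost ancestor."""
        d = 0
        while client in self.parent_of:
            client = self.parent_of[client]
            d += 1
        return d

    def possible_parents(self, candidates):
        """offer only candidates shallow enough that a new child
        wouldn't push the hierarchy past MAX_DEPTH."""
        return [c for c in candidates if self.depth(c) + 1 < MAX_DEPTH]
```

with reports flowing in like this, the server always has a current picture of the tree and can filter the possible-parent lists it hands out, rather than pushing clients off blindly.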


June 03, 1:01pm: test server going online. let's see how long before the first crash.
June 03, 2:45pm: server crashed. took the opportunity to add a few code checks and log messages. crash reason still unclear, but a temporary workaround is in place.
June 03, 3:02pm: server restarted.
June 03, 4:49pm: things are far from working as they should, but the server seems to stay up, and what wasn't fixed today is at least down on paper. it's been a very long day; i'll resume work tomorrow.
June 04, 1:59pm: posted a 2nd test client. there seem to be too many violations of the maximum distribution hierarchy depth, and my first guess is that the client's possible parents cache is the culprit. since the new distribution network is of a much more dynamic nature, we want clients to use up possible parent lists as soon as they get them from the server, and then forget about them. exit the possible parents cache.
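the "use up and forget" behavior might look something like this sketch (the function name and shape are invented for illustration):

```python
def find_parent(possible_parents, try_connect):
    """try each candidate from the server's list in order; whatever
    is left over is thrown away rather than cached, since cached
    entries go stale quickly on a network this dynamic."""
    for candidate in possible_parents:
        if try_connect(candidate):
            return candidate
    return None  # no luck: wait for a fresh list from the server
```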
June 04, 2:15pm: 2nd test client erroneously showing test 1 in about box. reposted.
June 07, 12:40pm: the distributed network seems to be functioning pretty well, although not yet at the level we're aiming for. one primary cause seems to be an intrinsic instability of connections between parents and children, and the prime suspect is a timeout mechanism prone to inaccuracy. this is corrected in the 3rd test version, which is now posted.
June 09, 8:47pm: yes yes y'all, i screwed up the child depth relay code. here's a great example of why you shouldn't program when you're stoned. feeling geeky? ok then. one thing we do on the new distributed network that wasn't done before is keep track, for each client, of what we call child depth. when a client doesn't have any children, its child depth is 0. if the client has any number of children, its child depth is 1, unless they have children of their own, in which case it's 2. unless those children have their own children... you get the idea. child depth signifies the number of levels of children a client has "under it". that information lets us make fairly sure that the distributed network as a whole doesn't grow past a certain depth, and that in turn helps give faster search results, and more of them, since ultimately there is less chance of failure along the way. this kind of depth information is collected from the very bottom of the network -- from childless children to parentmost parents, each hop is automatically counted as one level of depth. here's what our problem was: without the proper intervening logic, the client considered the word of the most recently talkative child over any of those that preceded it. that's no good if we're already recorded as having a child of, say, child depth 3 (thus making our own 4) but are then told of a child with a child depth of 0 (which incorrectly records our new depth as 1 when our actual depth hasn't really changed). silly? yes, but that's being overworked for ya. and this is what test 4 is all about fixing.
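the fix amounts to one rule: a client's child depth is one more than the *deepest* of its children's reported depths, not whatever the most recently talkative child happens to report. a minimal sketch, with names invented for illustration:

```python
class Peer:
    def __init__(self):
        self.child_depths = {}  # child id -> that child's reported child depth

    def on_child_depth_report(self, child_id, depth):
        """a child reports its own child depth up to us."""
        self.child_depths[child_id] = depth
        return self.child_depth()

    def on_child_lost(self, child_id):
        """a child disconnects; forget its report."""
        self.child_depths.pop(child_id, None)
        return self.child_depth()

    def child_depth(self):
        """0 with no children, otherwise 1 + the deepest child's depth."""
        if not self.child_depths:
            return 0
        return 1 + max(self.child_depths.values())
```

with this, a child reporting depth 0 can't clobber a sibling already at depth 3; our own depth stays 4 until the deeper child actually goes away.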
June 13, 9:56pm: 5th test version posted. more important fixes to the networking code.
July 29, 6:56pm: 6th test version posted, including some minor fixes to annoying bugs, such as offline users in search results being incorrectly colored as though they were online. much more importantly, it includes the new upload return system, which lets you set up shares of your favorite stuff from which files are automatically returned to users you've downloaded from. not all the kinks are worked out: if you decide to open yourself up to returned files, you may notice downloads you've queued but never downloaded appearing as uploads from their respective owners. if i ever go through with the reengineering i'm planning for the queue manager, that sort of annoyance should be resolved. once you run the new client you'll get a better explanation of this feature, along with some toggles you can use to start taking advantage of it almost right away.
July 29, 7:57pm: 6th test version was erroneously compiled from an older source tree. thankfully, numbers are cheap. test 7 posted.
July 29, 9:02pm: upload return isn't working in the release build and i can't figure out why. so we're going back to test 5 until i can figure this out. sorry for any inconvenience.
Aug 04, 3:05pm: upload return seems to be working a lot better. let's give this another shot with test client 8.