SoulseekQt lowmem edition phase 1 - user shares

So I'm not entirely sure how much of a problem SoulseekQt's memory consumption is for people out there, but it comes up pretty regularly on the forum, so I figured I'd push through the horror of refactoring some of the most sensitive, complicated parts of the client code and see how far I can get.

To recap, SoulseekQt takes a lot more memory than Soulseek NS, the Windows-only, basically original Soulseek client that's slowly been phased out over the last few years. As peer-to-peer software that does a lot of different things under the hood, the Soulseek client is a pretty complicated piece of software. When I decided to create a new cross-platform client using the Qt framework, I felt like I'd hit some sort of complexity ceiling. It primarily had to do with the different kinds of data managed by the client and their interrelationships, so I designed a data system, a sort of super map that can accommodate all of the client's data and makes it very easy to manage and navigate. That made it a lot easier to implement the more advanced features that set SoulseekQt apart from the older Soulseek NS.
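To give a rough idea of what a "super map" like that might look like, here's a minimal sketch. This is not the client's actual code (which isn't shown here); the `Node` type, its fields, and the key names are all illustrative, with `std::variant` standing in for something like Qt's `QVariant`:

```cpp
#include <map>
#include <string>
#include <variant>

// Hypothetical sketch: a single generic tree type that can hold any kind
// of client data, keyed by strings at every level. Flexible, but every
// node carries a variant plus its own child map, so the per-entry memory
// overhead is high compared to a plain typed container.
struct Node {
    std::variant<std::monostate, long long, std::string> value;
    std::map<std::string, Node> children;

    Node& operator[](const std::string& key) { return children[key]; }
};

// Demo: any data can be filed anywhere in the tree, e.g. a shared file's
// size under users/alice/share/song.mp3 (all names made up).
long long demo_lookup() {
    Node root;
    root["users"]["alice"]["share"]["song.mp3"].value = 3214155LL;
    root["users"]["alice"]["status"].value = std::string("online");
    return std::get<long long>(root["users"]["alice"]["share"]["song.mp3"].value);
}
```

The flexibility is what makes cross-cutting features easy to build; the cost, as described below, is memory.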

The downside is that it ends up consuming a lot more memory. This isn't a problem in most areas of functionality, but there are three in particular where the difference might be noticeable:

  1. When you browse users with large shares.
  2. When you share a lot of files yourself.
  3. When you keep a lot of search windows open.

Of those, the first one is probably the one that affects the most users. Search result windows are limited to 10,000 results each by default, which isn't that much data. And most users don't share hundreds of thousands of files, which is the point at which memory consumption becomes excessive. Browsing a single user's (large) share, on the other hand, can consume hundreds of megabytes, and keeping multiple such share browse windows open can often spell trouble. Fortunately, it's the easiest of the three to phase the data system out of and replace with memory-efficient standard containers (essentially a whole bunch of b-tree maps). Easiest, but not easy. It took me a few days to get to the point where I could actually browse someone and download something from their share. Taking the data system out doesn't only reduce memory consumption, though; it also increases the possibility of data inconsistencies and makes bugs more likely, so bear that in mind. On the memory consumption side, things are looking much better: browsing a share of about 450,000 files, memory consumption for loading the share alone went from about 350MB to 100MB with the Windows build, which is 32-bit, and roughly double those numbers with the macOS build, which is 64-bit.
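The typed-container approach can be sketched like this. Again, this is just an illustration of the general idea, not the client's real code: the struct fields, type names, and folder layout are assumptions, and `std::map` stands in for whatever ordered-map containers the client actually uses:

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>

// Hypothetical sketch of the phase-1 approach: a browsed share held in
// plain typed structs and standard ordered maps, so each entry stores
// only the fields it actually needs instead of a generic variant node.
struct FileEntry {
    std::uint64_t size = 0;  // bytes
    int bitrate = 0;         // cached audio attribute, if known
    int seconds = 0;         // duration, if known
};

// folder path -> (file name -> attributes); one ordered map per level.
using Share = std::map<std::string, std::map<std::string, FileEntry>>;

std::size_t count_files(const Share& share) {
    std::size_t n = 0;
    for (const auto& folder : share) n += folder.second.size();
    return n;
}

// Demo share with made-up paths and attributes.
Share make_demo_share() {
    Share s;
    s["Music\\Albums"]["track01.mp3"] = {5120000, 320, 240};
    s["Music\\Albums"]["track02.mp3"] = {4800000, 320, 215};
    return s;
}
```

A fixed-layout entry like this costs a few dozen bytes plus the map node overhead, whereas a generic tree node pays for a variant and a child map at every level, which is one plausible way the kind of savings described above could come about.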

I'm not completely decided on 2 and 3 yet. Removing the data system from search results may not have that much of an effect. Removing it from your own shared files will definitely help if you're sharing hundreds of thousands of files, but it's by far the hardest to do, since your own shared files are indexed in all sorts of exotic ways, and the only way to save client data currently, such as cached audio attributes of scanned shared files, is via said data system. So it would be a big project. It's not off the table, but at a time when Chrome on my desktop is taking 1.5GB with four tabs open, I have to wonder if it's worth the trouble. In the meantime, these are the Windows/macOS builds for phase 1:

Linux (64-bit AppImage):

Comments can be posted to this forum thread: