11 comments

  • cryptonector · 2 hours ago
    ^F intern -> not found.

    But string interning is what they're doing, isn't it?

    > Dictionary compression is a well-known and widely applied technique. The basic idea is that you store all the unique input values within a dictionary, and then you compress the input by substituting the input values with smaller fixed-size integer keys that act as offsets into the dictionary. Building a CedarDB dictionary on our input data and compressing the data would look like this:

    That's string interning!!

    Is interning just so old a concept now that it has to be rediscovered/reinvented and renamed?
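    For concreteness, a minimal Python sketch of dictionary encoding, i.e. interning, over a column of strings (the column and values here are made up):

    ```python
    # Dictionary-encode ("intern") a column: each distinct string is stored once,
    # and rows hold small integer codes that index into the dictionary.
    def dictionary_encode(column):
        dictionary = []   # unique values, in first-seen order
        code_of = {}      # value -> integer code
        codes = []        # per-row codes replacing the raw strings
        for value in column:
            if value not in code_of:
                code_of[value] = len(dictionary)
                dictionary.append(value)
            codes.append(code_of[value])
        return dictionary, codes

    # A repetitive "city" column compresses to three dictionary entries.
    cities = ["Berlin", "Munich", "Berlin", "Berlin", "Hamburg", "Munich"]
    dictionary, codes = dictionary_encode(cities)
    print(dictionary)  # ['Berlin', 'Munich', 'Hamburg']
    print(codes)       # [0, 1, 0, 0, 2, 1]
    ```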
    • 0928374082 · 1 hour ago
      > rediscovered/reinvented and renamed

      It's not old yet; it'll be old once it's been renamed two or three times...
  • crazygringo · 8 hours ago
    I'm genuinely surprised that there isn't column-level shared-dictionary string compression built into SQLite, MySQL/MariaDB, or Postgres, like this post is describing.

    SQLite has no compression support, MySQL/MariaDB have page-level compression that doesn't work great and that I've never seen anyone enable in production, and Postgres has per-value compression, which is good for extremely long strings but useless for short ones.

    There are just so many string columns where values and substrings get repeated, whether you're storing names, URLs, or just regular text. And I have databases I know would be reduced in size by at least half.

    Is it just really, really hard to maintain a shared dictionary when constantly adding and deleting values? Is there just no established reference algorithm for it?

    It still seems like it would be worth it even if it were something you had to set manually. E.g. wait until your table has 100,000 values, build a dictionary from those, and that dictionary is set in stone and used for the next 10,000,000 rows too, unless you rebuild it in the future (which would be an expensive operation).
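    A rough Python sketch of that manual scheme, a dictionary frozen after an initial sample with a raw-string fallback for values it has never seen (the class and sample values here are hypothetical):

    ```python
    class FrozenDictionaryColumn:
        """Build a dictionary once from a sample of rows, then freeze it.
        Later values that are not in the dictionary are stored verbatim."""

        def __init__(self, sample_values):
            self.dictionary = sorted(set(sample_values))
            self.code_of = {v: i for i, v in enumerate(self.dictionary)}

        def encode(self, value):
            # Known value -> small integer code; unknown value -> raw string.
            code = self.code_of.get(value)
            return ("dict", code) if code is not None else ("raw", value)

        def decode(self, encoded):
            kind, payload = encoded
            return self.dictionary[payload] if kind == "dict" else payload

    # Build from the first batch of rows, then reuse for all later inserts
    # until someone explicitly rebuilds it.
    codec = FrozenDictionaryColumn(["pending", "shipped", "delivered"])
    print(codec.encode("shipped"))   # ('dict', 2)
    print(codec.encode("returned"))  # ('raw', 'returned') -- not in the frozen dictionary
    ```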
    • ww520 · 5 hours ago
      Strings in a textual index are already compressed, with common-prefix compression or other schemes, and they remain perfectly queryable. Not sure if their compression scheme is for indexes or data columns.

      A global column dictionary has more complexity than usual. Now you are touching more pages than just the index pages and the data page. The dictionary entries are sorted, so you need to worry about page expansion and contraction. They sidestep those problems by making the dictionary immutable, presumably building it up front by scanning all the data.

      Not sure why using FSST is better than using a standard compression algorithm to compress the dictionary entries.

      Storing the strings themselves as dictionary IDs is a good idea, as they can be processed quickly with SIMD.
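      For reference, common-prefix compression (front coding) of sorted index entries looks roughly like this (a simplified sketch, not any particular engine's on-disk format):

      ```python
      import os

      def front_encode(sorted_keys):
          """Front coding: each key stores how many leading characters it shares
          with the previous key, plus only the differing suffix."""
          encoded, previous = [], ""
          for key in sorted_keys:
              shared = len(os.path.commonprefix([previous, key]))
              encoded.append((shared, key[shared:]))
              previous = key
          return encoded

      def front_decode(encoded):
          keys, previous = [], ""
          for shared, suffix in encoded:
              previous = previous[:shared] + suffix
              keys.append(previous)
          return keys

      keys = ["compress", "compression", "compressor", "computer"]
      enc = front_encode(keys)
      print(enc)  # [(0, 'compress'), (8, 'ion'), (8, 'or'), (4, 'uter')]
      assert front_decode(enc) == keys
      ```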
      • randomuser475 · 7 minutes ago
        > Not sure why using FSST is better than using a standard compression algorithm to compress the dictionary entries.

        I believe the reason is that FSST allows access to individual strings in the compressed corpus, which is required for fast random access. This is more important for OLTP than OLAP, I assume. More standard compression algorithms, such as zstd, might decompress very fast, but I don't think they allow that.
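        A toy illustration of that property: with a static symbol table, every string is encoded independently, so any single row can be decoded on its own, whereas a block codec such as zstd has to decompress the surrounding block first. (This is only a sketch of the idea, not FSST's actual symbol-table format.)

        ```python
        # Toy static-symbol-table codec: a few frequent substrings get one-byte
        # codes, everything else is emitted as a literal. Each string is encoded
        # on its own, so decoding row N never touches any other row.
        SYMBOLS = {0: "http://", 1: "www.", 2: ".com", 3: ".org"}

        def encode(text):
            out, i = [], 0
            while i < len(text):
                for code, sym in SYMBOLS.items():
                    if text.startswith(sym, i):
                        out.append(("sym", code))
                        i += len(sym)
                        break
                else:
                    out.append(("lit", text[i]))
                    i += 1
            return out

        def decode(tokens):
            return "".join(SYMBOLS[v] if kind == "sym" else v for kind, v in tokens)

        rows = ["http://www.example.com", "http://duckdb.org"]
        compressed = [encode(r) for r in rows]
        print(decode(compressed[1]))  # random access: only row 1 is decoded
        ```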
    • groundzeros2015 · 2 hours ago
      How do you lay out all that variable-length data in memory?
    • hinkley · 7 hours ago
      There are some databases that can move an entire column into the index. But that's mostly going to work for schemas where the number of distinct values is <<< rowcount, so that you're effectively interning the rows.
    • analyst74 · 7 hours ago
      Compression is not free. Dictionary compression:

      1. complicates and slows down updates, which are typically more important in OLTP than in OLAP;

      2. is generally bad for high-cardinality columns, which means you have to track cardinality to decide when to apply it, which further complicates things.

      Lastly, additional operational complexity (like the table maintenance scheme you described in your last paragraph) could reduce system reliability, and they might decide it's not worth the price or that it's against their philosophy.
    • pstuart · 3 hours ago
      DuckDB can also handle SQLite files: https://duckdb.org/docs/stable/core_extensions/sqlite
  • ayuhito · 7 hours ago
    DuckDB has one of my favourite articles on this topic if you want something a little more high-level: https://duckdb.org/2022/10/28/lightweight-compression
  • stephantul · 51 minutes ago
    The compression algorithm is very similar to a greedy subword tokenizer, which is used in BERT and other older language models but has become less popular in favor of BPE.
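    For anyone unfamiliar, greedy (longest-match-first) subword tokenization works roughly like this; the vocabulary below is made up, and real WordPiece adds details such as "##" continuation markers:

    ```python
    def greedy_tokenize(word, vocab):
        """WordPiece-style greedy tokenization: repeatedly take the longest
        vocabulary entry that prefixes the remaining text."""
        tokens, i = [], 0
        while i < len(word):
            for end in range(len(word), i, -1):  # try the longest piece first
                if word[i:end] in vocab:
                    tokens.append(word[i:end])
                    i = end
                    break
            else:
                tokens.append("<unk>")           # no piece matched at this position
                i += 1
        return tokens

    vocab = {"compress", "ion", "de", "un"} | set("abcdefghijklmnopqrstuvwxyz")
    print(greedy_tokenize("compression", vocab))    # ['compress', 'ion']
    print(greedy_tokenize("decompression", vocab))  # ['de', 'compress', 'ion']
    ```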
  • cespare · 2 hours ago
    This scheme reminds me of smaz (https://github.com/antirez/smaz), but with dynamically generated dictionaries.
  • lor_louis · 5 hours ago
    I've implemented a similar system based on the original 2020 paper, but we applied it to text logs to try to "extract" similar features from free-form text. It looked promising and even supported full regex search, but the work was ultimately abandoned when we got acquired.
  • vismit2000 · 5 hours ago
    Honorable mention: https://github.com/pytries/marisa-trie
  • mbfg · 11 hours ago
    I wonder how one does LIKE queries.
    • hcs · 9 hours ago
      After decompression, with the performance characteristics you'd expect. If it has to come off disk it's still a win, or at least usually breaks even, in their measurements: https://cedardb.com/blog/string_compression/#query-runtime

      The paper suggests that you could rework string matching to work on the compressed data, but they haven't done it.
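      Conceptually something like this: decode each candidate value, then run the ordinary pattern match over it (a sketch; the LIKE-to-regex translation is simplified and ignores ESCAPE clauses):

      ```python
      import re

      def like_to_regex(pattern):
          # SQL LIKE wildcards: % matches any run of characters, _ matches one.
          return "^" + re.escape(pattern).replace("%", ".*").replace("_", ".") + "$"

      def like_scan(encoded_column, decode, pattern):
          regex = re.compile(like_to_regex(pattern))
          # Decompress first, then match; matching directly on the compressed
          # bytes would need the rework the paper describes.
          return [i for i, enc in enumerate(encoded_column) if regex.match(decode(enc))]

      # e.g. like_scan(codes, lambda c: dictionary[c], "%press%") over a
      # dictionary-encoded column like the one sketched upthread.
      ```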
    • speed_spread · 10 hours ago
      s, jst cmprss ll qrs b rmvng vyls!
  • vivzkestrel · 3 hours ago
    Pardon my stupid question: what does CedarDB do that Postgres, DuckDB, Cassandra, and the 758452758421 databases that came before don't?
  • ForHackernews · 10 hours ago
    Never heard of CedarDB.

    Seems to be another commercial cloud-hosted thing offering a Postgres API? https://dbdb.io/db/cedardb

    https://cedardb.com/blog/ode_to_postgres/
    • switz · 10 hours ago
      I was evaluating it recently, but it's not FOSS, so buyer beware. I'm totally fine with commercialization, but I hesitate to build on top of data stores with no escape hatches or maintenance plans, especially when they're venture-backed. It is self-hostable, but not OSS.
    • cmrdporcupine · 9 hours ago
      It's a startup founded by, and built with tech coming out of research by, some well-known people in the DB research community.

      Successor to Umbra, I believe.

      I know somebody (quite talented) working there. It's likely to kick ass in terms of performance.

      But it's hard to get people to pay for a DB these days.
      • atombender · 7 hours ago
        It's probably going to be acquired. The last effort to commercialize the work of the TUM (Technical University of Munich) database group was acquired by Snowflake and disappeared into that stack.

        CedarDB is the commercialization of Umbra, the TUM group's in-memory database led by professor Thomas Neumann. Umbra is a successor to HyPer, so this is the third generation of the system Neumann came up with.

        Umbra/CedarDB isn't a completely new way of doing database stuff, but basically a combination of several things that rearchitect the query engine from the ground up for modern systems: a query compiler that generates native code, a buffer pool manager optimized for multi-core, push-based DAG execution that divides work into batches ("morsels"), and in-memory Adaptive Radix Trees (never used in a database before, I think).

        It also has an advanced query planner that embraces the latest theoretical advances in query optimization, in particular techniques to unnest complex query plans, especially for queries with a ton of joins. The TUM group has published some great papers on this.
        • senderista · 6 hours ago
          Umbra is not an in-memory database (HyPer was). TUM gave up on the feasibility of in-memory databases several years ago (when the price of RAM relative to storage stopped falling).
          • cmrdporcupine · 6 hours ago
            Yeah, I think the way Umbra was pitched when I watched the talks and read the paper was more as "hybrid", in the sense that it aimed for something close to in-memory performance while optimizing the page-in/page-out performance profile.

            The part of Umbra I found interesting was the buffer pool, so that's where I focused most of my attention when reading, though.
        • senderista · 6 hours ago
          Are you thinking of HyPer being acquired by Tableau?
  • semiinfinitely · 4 hours ago
    It's a lesser-known fact that LLMs are SOTA at lossless string compression.
    • boulos · 1 hour ago
      This isn't strictly correct: you probably mean wrt compressed size. Compression is a tradeoff between size reduction and compression/decompression speed. So while things like Bellard's ts_zip (https://bellard.org/ts_zip/) or nncp compress really well, they are extremely slow compared to, say, zstd or the much faster compression scheme in the article. It's a totally different class of codec.