this post was submitted on 29 Sep 2023
102 points (100.0% liked)

While LLMs have been used for... a lot, this seems like one use where they're not only reliable but appear to outperform existing methods of image compression. Being able to cram more data into less space tends to lead to interesting developments, so I will be keeping my eye on this.

What do you guys think? Seem like it's deserving of less hype than I'm giving it? What kind of security holes do you think this could open?

[–] EdgeOfToday@lemm.ee 29 points 1 year ago (16 children)

With a neural network, you wouldn't be able to mathematically prove that the signal is perfectly recovered 100% of the time for all possible inputs. That is the case with PNG and FLAC. If you're just listening to music and need a good compression ratio, then sure, it won't be a big deal if a couple of bits are wrong. But that's also why we have lossy compression. If the goal is to make signal degradation imperceptible to a human, then you could get a much better compression ratio using neural networks. If it's truly critical that the signal isn't corrupted, it would probably be better to just use the original method.

[–] ZickZack@kbin.social 27 points 1 year ago (3 children)

That's not what lossless data compression schemes do:
In lossless compression the general idea is to create a codebook of commonly occurring patterns and use those as shorthand.
For example, one of the simplest (and now ancient) algorithms, LZW, does the following:

  • Initialize the dictionary to contain all strings of length one.
  • Find the longest string W in the dictionary that matches the current input.
  • Emit the dictionary index for W to output and remove W from the input.
  • Add W followed by the next symbol in the input to the dictionary.
  • Repeat from step 2.

Basically, instead of rewriting long sequences, it just writes down an index into an existing dictionary of already-seen sequences.
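
To make that concrete, here's a minimal sketch of the encoder side (Python purely for illustration; nothing in the thread implies a particular language):

```python
def lzw_compress(data: str) -> list[int]:
    """Return data as a list of codebook indices (toy LZW, no bit packing)."""
    # Step 1: the dictionary starts with every single character.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    w = ""
    output = []
    for c in data:
        # Step 2: grow the current match while it's still in the dictionary.
        if w + c in dictionary:
            w += c
        else:
            # Step 3: emit the index of the longest match found.
            output.append(dictionary[w])
            # Step 4: add the match plus the next symbol as a new entry.
            dictionary[w + c] = next_code
            next_code += 1
            w = c
    if w:
        output.append(dictionary[w])
    return output

print(lzw_compress("TOBEORNOTTOBEORTOBEORNOT"))  # repeats collapse into single indices
```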

However, once this is done, you now need to find an encoding that takes your character set (the original characters + the new dictionary references) and turns it into bits.
It turns out that we can do this optimally: using an algorithm called arithmetic coding, we can match the length of a bitstring to the amount of information it contains.
"Information" here means the statistical concept of information, which depends on the inverse of the likelihood that a certain character is observed.
Logically this makes sense:
Let's say you have a system that measures earthquakes. As one would expect, most of the time, let's say 99% of the time, you will see "no earthquake", while in 1% of the cases you will observe "earthquake".
Since "no earthquake" is a lot more common, the information gain is relatively small (if I told you "the system said no earthquake", you could have guessed that with 99% confidence: not very surprising).
However if I tell you "there is an earthquake" this is much more important and therefore is worth more information.
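
Putting numbers on that (self-information is the negative log of the probability):

```python
import math

# Self-information in bits: I(x) = -log2(P(x))
for event, p in [("no earthquake", 0.99), ("earthquake", 0.01)]:
    print(f"{event}: {-math.log2(p):.3f} bits")
# no earthquake: 0.014 bits
# earthquake: 6.644 bits
```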

From information theory (a branch of mathematics), we know that if we want to maximize the efficiency of our codec, we have to match the length of every character to its information content. Arithmetic coding now gives us a general way of doing this.
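
As a toy illustration of the interval-narrowing idea behind arithmetic coding (a real coder uses integer ranges with renormalization; plain floats only survive short messages):

```python
def arith_encode(symbols: str, probs: dict[str, float]) -> float:
    """Narrow [low, high) by each symbol's probability slice."""
    low, high = 0.0, 1.0
    for s in symbols:
        span = high - low
        cum = 0.0
        for sym, p in probs.items():
            if sym == s:
                low, high = low + span * cum, low + span * (cum + p)
                break
            cum += p
    return (low + high) / 2  # any number in [low, high) identifies the message

# Common symbols barely narrow the interval, rare ones narrow it a lot,
# so a message that is mostly "n" needs very few bits to pin down.
print(arith_encode("nnnnennn", {"n": 0.99, "e": 0.01}))
```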

However, we can do even better:
Instead of just considering individual characters, we can also add in character pairs!
Of course, it doesn't make sense to add in every possible character pair, but for some of them it makes a ton of sense:
For example, if we want to compress English text, we could give a separate codebook entry to the entire sequence "the" and save a ton of bits!
To do this for pairs of characters in the English alphabet, we have to consider 26*26=676 combinations.
We can still do that: worst case, you scan the text once per candidate pair.
With 3-character combinations it becomes a lot harder: 26*26*26=17576 combinations.
But with 4 characters it's practically impossible: you already have nearly half a million combinations!
In reality, this is even worse, since you have way more than 26 characters: you have punctuation like ",", ".", "?", "!", plus your codebook IDs, which blow up the size even more!
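
Tallying which pairs actually occur is cheap in a single pass; the blowup above is in how many candidate entries you'd have to consider giving codes to. A quick sketch:

```python
from collections import Counter

text = "the theory of the thing that they theorized"
# Tally every adjacent character pair in one pass over the text.
pairs = Counter(text[i:i + 2] for i in range(len(text) - 1))
print(pairs.most_common(3))  # "th" and "he" dominate English text
```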

So, how are we supposed to figure out which character pairs to combine and how many bits we should give them?
We can try to predict it!
This technique, called [PPM (prediction by partial matching)](https://en.wikipedia.org/wiki/Prediction_by_partial_matching), is already very old (~1980s) but is still used in many compression algorithms.
The important trick now is that with deep learning, we can train even more efficient estimators without losing the lossless property:
Remember, we only predict what things we want to combine, and how many bits we want to assign to them!
The worst-case scenario is that your compression gets worse because the model predicts nonsensical character combinations to store, but that never changes the actual information you store, just how close you get to the optimal compression.
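
Here's a toy order-1 context model in the spirit of PPM; a neural network would simply replace this frequency table with a learned predictor, and the coder stays lossless either way:

```python
from collections import Counter, defaultdict

def train(text: str):
    """Count which character follows which (an order-1 context model)."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, prev: str) -> dict[str, float]:
    """Turn one context's counts into next-character probabilities."""
    ctx = counts[prev]
    total = sum(ctx.values())
    return {c: n / total for c, n in ctx.items()}

model = train("the theory there then")
print(predict(model, "t"))  # 'h' dominates, so "th" deserves a short code
```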

The state of the art in text compression has used this for a long time (see the Hutter Prize); it's just now getting to a stage where systems become fast and accurate enough to also make the compression useful for other domains/general-purpose compression.

[–] firenzeleon@beehaw.org 5 points 1 year ago (1 children)

> That’s not what lossless data compression schemes do:

Yes it is.

You went into a lot of detail about how they do it, but it's still what they do.

[–] BFrizzleFoShizzle 2 points 1 year ago

I think the main point they're disagreeing with is this:

> you wouldn’t be able to mathematically prove that the signal is perfectly recovered 100% of the time for all possible inputs

They explain why you don't need 100% accuracy - most compression codecs would only use the network for a prediction, which doesn't actually have to be correct. It just has to be "more likely to be correct" than existing algorithms.
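
That's the prediction-plus-residual pattern: the predictor can be arbitrarily wrong and the round trip is still exact, because only the prediction error is stored. A sketch (illustrative, not any specific codec):

```python
def encode(samples: list[int], predict) -> list[int]:
    """Store each sample as its difference from the prediction."""
    prev, residuals = 0, []
    for s in samples:
        residuals.append(s - predict(prev))
        prev = s
    return residuals

def decode(residuals: list[int], predict) -> list[int]:
    """Rebuild the exact samples by re-running the same predictor."""
    prev, out = 0, []
    for r in residuals:
        prev = predict(prev) + r
        out.append(prev)
    return out

identity = lambda prev: prev  # a deliberately crude "no change" predictor
data = [10, 12, 13, 13, 11]
res = encode(data, identity)
assert decode(res, identity) == data  # lossless no matter how bad the predictor
print(res)  # [10, 2, 1, 0, -2] -- small residuals are cheap to encode
```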

If you want to read up more on the context of these prediction functions, the general class of compression algorithms you'd use for this is predictive codecs. FLAC and arguably PNG both fall in that class: FLAC predicts each audio sample and stores the residual, and PNG's filters predict each pixel from its neighbours.
