While I'm a big supporter of JPEG-XL on HN, I just want to note that AV2 is coming out soon, which should further improve image compression. (Edit: also worth pointing out that the current JPEG-XL encoder is nowhere near its maximum potential in terms of quality / compression ratio.)
But JPEG-XL is being quite widely used now, from PDF and medical images to lossless camera formats, as well as being evaluated at different stages of cinema / artist production workflows. Hopefully the Rust decoder will be ready soon.
And from the wording, it seems to imply Google Chrome will officially support anything from AOM.
AVIF/AV1 is a codec that encodes both lossy and lossless files very slowly. JXL is significantly faster than AVIF. But AVIF provides better image quality than JXL even at lower settings. However, AV2 will require much more power and system resources for a small bandwidth gain.
> AV2 .... further improve image compression. (Edit: also worth pointing out that the current JPEG-XL encoder is nowhere near its maximum potential in terms of quality / compression ratio.)
But at what cost? From the links below, encoding/decoding cost is much higher for those advanced video codecs, so they wouldn't be very suitable for various lower-powered devices, would they?
Also, can we expect "near max potential" with AV2 in the near future, or is it an ever-unachievable goal that shouldn't stop us from adding "non-max" codecs?
https://res.cloudinary.com/cloudinary-marketing/image/upload...
https://cloudinary.com/blog/time_for_next_gen_codecs_to_deth...
>medical images
Isn't JPEG-XL a lossy codec?
JPEG-XL is both a lossy and lossless codec. It is already being used in the camera DNG format, making RAW images smaller.
While lossy codecs are hard to compare and up for debate, JPEG-XL is actually better as a lossless codec in terms of compression ratio and compression complexity. There is only one other codec that beats it, but it is not open source.
What is the non-open source codec?
HALIC is by far the best lossless codec in terms of speed/compression ratio. If a lossy mode were similarly available, we might not be discussing all these issues. I think its developer stopped working on HALIC for a long time due to lack of interest.
Its developer is also developing HALAC (High Availability Lossless Audio Compression). He recently released the source code for the first version of HALAC. And I don't think anyone cared.
HALIC (High Availability Lossless Image Compression)
https://news.ycombinator.com/item?id=38990568
It has both lossy and lossless modes.
Good to hear.
I sure hope they came up with a good, clear system to distinguish them.
As in, a clear way to detect whether a given file is lossy or lossless?
I was thinking that too, but on the other hand, even a lossless file can't guarantee that its contents aren't the result of going through a lossy intermediate format, such as a screenshot created from a JPEG.
I meant like a filename convention, and tags in the file itself.
There is some sort of tag: jxlinfo can tell you whether a file is "lossy" or "(possibly) lossless".
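For example, one quick way to surface that from a script is just to shell out to jxlinfo and look for that hint in its output; a minimal sketch, assuming jxlinfo (from libjxl) is on PATH:

```python
# Minimal sketch: ask libjxl's `jxlinfo` tool whether a .jxl file is lossy or
# (possibly) lossless, based on the hint line it prints. Assumes jxlinfo is on PATH.
import subprocess
import sys

def jxl_lossless_hint(path: str) -> str:
    result = subprocess.run(
        ["jxlinfo", path], capture_output=True, text=True, check=True
    )
    for line in result.stdout.splitlines():
        if "lossless" in line or "lossy" in line:
            return line.strip()
    return "no lossy/lossless hint found"

if __name__ == "__main__":
    print(jxl_lossless_hint(sys.argv[1]))
```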
Presumably you can look at the file and tell which mode is used, though why would you care to know from the filename?
I find it incredibly helpful to know that .jpg is lossy and .png is lossless.
There are so many reasons why it's almost hard to know where to begin. But it's basically the same reason why it's helpful for some documents to end in .docx and others to end in .xlsx. It tells you what kind of data is inside.
And at least for me, for standard 24-bit RGB images, the distinction between lossy and lossless is much more important than between TIFF and PNG, or between JPG and HEIC. Knowing whether an image is degraded or not is the #1 important fact about an image for me, before anything else. It says so much about what the file is for and not for -- how I should or shouldn't edit it, what kind of format and compression level is suitable for saving after editing, etc.
After that comes whether it's animated or not, which is why .apng is so helpful to distinguish it from .png.
There's a good reason Microsoft Office documents aren't all just something like .msox, with an internal tag indicating whether they're a text document or a spreadsheet or a presentation. File extensions carry semantic meaning around the type of data they contain, and it's good practice to choose extensions that communicate the most important conceptual distinctions.
>I find it incredibly helpful to know that .jpg is lossy and .png is lossless.
Unfortunately we have been through this discussion before, and the author of JPEG-XL strongly disagrees with this. I understand where they are coming from, but I agree with you that it would have been easier to have the two separated in naming and extensions.
Legacy. It’s how things used to be done. Just like Unix permissions, shared filesystem, drive letters in the file system root, prefixing urls with the protocol, including security designators in the protocol name…
Be careful to ascribe reason to established common practices; it can lead to tunnel vision. Computing is filled with standards which are nothing more than “whatever the first guy came up with”.
https://en.wikipedia.org/wiki/Appeal_to_tradition
Just because metadata is useful doesn’t mean it needs to live in the filename.
> Knowing whether an image is degraded or not is the #1 important fact about an image for me
But how can you know that from the fact that it's currently losslessly encoded? People take screenshots of JPEGs all the time.
> After that comes whether it's animated or not, which is why .apng is so helpful to distinguish it from .png.
That is a useful distinction in my view, and there's some precedent for solutions, such as how Office files containing macros have an "m" added to their file extension.
Obviously nothing prevents people from taking PNG screenshots of JPEGs. You can make a PNG out of an out-of-focus camera image too. But at least I know the format itself isn't adding any additional degradation over whatever the source was.
And in my case I'm usually dealing with a known workflow. I know where the files originally come from, whether .raw or .ai or whatever. It's very useful to know that every .jpg file is meant for final distribution, whereas every .png file is part of an intermediate workflow where I know quality won't be lost. When they all have the same extension, it's easy to get confused about which stage a certain file belongs to, and accidentally mix up assets.
But JPEG has a lossless mode as well. How do you distinguish between the two now?
This is an arbitrary distinction. For example, why then do mp3 and ogg (Vorbis) have different extensions? They're both lossy audio formats, so by that requirement the extension should be the same.
Otherwise, we should distinguish between bitrates with different extensions, e.g. .mp3128, .mp3192, etc.
In theory JPEG has a lossless mode (in the standard), but it's not supported by most applications (not even libjpeg) so it might as well not exist. I've certainly never come across a lossless JPEG file in the wild.
Filenames also of course try to indicate technical compatibility as to what applications can open them, which is why .mp3 and .ogg are different -- although these days, extensions like .mkv and .mp4 tell you nothing about what's in them, or whether your video player can play a specific file.
At the end of the day it's just trying to achieve a good balance. Obviously including the specific bitrate in a file extension goes too far.
Surely something close to perceptually lossless is sufficient for most use cases?
Think of all the use cases where the output is going to be ingested by another machine. You don't know that "perceptually lossless" as designed for normal human eyeballs on normal screens in normal lighting environments is going to contain all the information an ML system will use. You want to preserve data as long as possible, until you make an active choice to throw it away. Even the system designer may not know whether it's appropriate to throw that information away, for example if they're designing digital archival systems and having to consider future users who aren't available to provide requirements.
Here are the direct links:
blink-dev mailing list
https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKc...
Tracking Bug (reopened)
https://issues.chromium.org/issues/40168998
Yeah note that Google only said they're now open to the possibility, as long as it is written in Rust (rightly so).
The patch at the end of that thread uses a C++ implementation so it is a dead end.
Rick specifically mentioned a commitment to long-term maintenance and meeting the usual standards for shipping. The C++ implementation was abandoned in favor of a new one in Rust, so it's not necessarily a dead end.
I meant the C++ patch is a dead end; not JPEG XL support in general. Seems like there's a Rust library that will have to be used instead.
My introduction to JPEG-XL was by 2kliksphillip on YouTube, he has a few really good analyses on this topic, including this video: https://youtu.be/FlWjf8asI4Y
Maybe they'll do it right this time
> The team explained that other platforms moved ahead. Safari supports JPEG XL, and Windows 11 users can add native support through an image extension from Microsoft Store. The format is also confirmed for use in PDF documents.
glad those folks didn't listen to "the format is dead since the biggest browser doesn't support it" (and shame on Firefox for not doing the same)
I think the article is slightly misleading: it says "Google has resumed work on JPEG XL", but I don't think they have - their announcement only says they "would welcome contributions" to implement JPEG XL support. In other words, Google won't do it themselves, but their new position is they're now willing to allow someone else to do the work.
Describing it as 'Google' is misleading, because different arms of the company might as well be completely different companies. The Chrome org seems to have had the same stance as Firefox with regards to JPEG XL: "we don't want to add 100,000 lines of multithreaded C++ because it's a giant gaping security risk", and the JPEG XL team (in a completely separate org) is addressing those concerns by implementing a Rust version. I'd guess that the need for a "commitment to long-term maintenance" is Chrome fighting with Google Research or whatever about long-term headcount allocation for support: Chrome doesn't want the JPEG XL team to launch and abandon JPEG XL in Chrome, leaving Chrome engineers to deal with the fallout.
It's technically correct. Googlers (at Google Research Zurich) have been working on jxl-rs, a Rust implementation of JPEG XL. Google Research has been involved in JPEG XL from the beginning, both in the design of the codec and in the implementation of libjxl and now jxl-rs.
But until now, the position of other Googlers (in the Chrome team) was that they didn't want to have JPEG XL support in Chrome. And that changed now. Which is a big deal.
Yes, and they will also only accept it if the library is written in Rust. The patch to add support that is in the thread and referenced in the article uses libjxl, which is C++ and therefore cannot be used.
If you want to compare JXL vs AVIF by taking photos yourself and have an Android phone, try this APK: https://github.com/particlo/camataca. I thought JXL was better from its website benchmarks, but after trying it myself I find JXL generates ugly blocky artifacts.
jxl-rs https://github.com/libjxl/jxl-rs was referenced as a possibility; what library is Safari using for JPEG XL?
libjxl:
https://github.com/libjxl/libjxl
https://github.com/WebKit/WebKit/blob/7879cb55638ec765dc033d...
It's a little step, but a step forward. JXL is on par with AVIF and WebP2 most of the time, but it is much better for sharing photography.
There is no reason to block its adoption.
The final piece of the JPEG XL puzzle!
It's a huge piece for sure, but not the only one. For example, Firefox and Windows both don't support it out of the box currently. Firefox requires nightly or an extension, and on Windows you need to download support from the Microsoft store.
> on Windows you need to download support from the Microsoft store.
To be really fair, on Windows:
- H.264 is the only guaranteed (modern-ish) video codec (HEVC, VP9, and AV1 are not built in unless the device manufacturer bothered to add them)
- JPEG, GIF, and PNG are the only guaranteed (widely-used) image codecs (HEIF, AVIF, and JXL are also not built in)
- MP3 and AAC are the only guaranteed (modern-ish) audio codecs (Opus is another module)
... and all of them were already widely used when Windows 7 was released (before the modern codecs), so optional modules are apparently now the modern Windows Method™ for codecs.
Note on pre-8 HEVC support: the codec (when not in VLC or other software bundling its own codecs) often comes from that CyberLink Blu-ray player, not a built-in one.
Would PDF 2.0 (which also depends on JPEG XL and Brotli) put pressure on Firefox and Windows to add more easy-to-use support?
I don't think so: JPEG 2000, as far as I know, isn't generally supported for web use in web browsers, but it is supported in PDF.
JPEG-XL is recommended as the preferred format for HDR content for PDFs, so it’s more likely to be encountered:
https://www.theregister.com/2025/11/10/another_chance_for_jp...
I'm not convinced HDR PDFs will be a common thing anytime soon, even without this chicken-and-egg problem of support.
What I mean to say is, I believe browsers do support JPEG 2000 in PDF, just not on the web.
The last time I checked, I found that I needed to convert to JPEG to show the image in browsers.
A *PDF* with embedded JPEG 2000 data should, as far as I know, decode in modern browser PDF viewers. PDF.js and PDFium both are using OpenJPEG. But despite that, browsers don't currently support JPEG 2000 in general.
I'm saying this to explain how JPEG XL support in PDF isn't a silver bullet. Browsers already support image formats in PDF that are not supported outside of PDF.
So Firefox (or others) can't open a PDF with an embedded JPEG 2000/XL image? Or does pdf.js somehow support it?
Apparently I really flubbed my wording for this comment. I'm saying they do support it inside of PDF, just not elsewhere in the web platform.
Seems like it: https://github.com/mozilla/pdf.js.openjpeg
This test renders correctly in Firefox, in any case: https://sources.debian.org/data/main/p/pdf2djvu/0.9.18.2-2/t...
A large and important piece, but not the final one. If it remains a web-only codec, that is, with no Android or iOS support for taking photos in JPEG XL, then web media will still be dominated by JPEGs.
How quickly things turn. Hard not to support it given Chrome wants to support PDF natively.
2026 is nearly upon us, and Google, Microsoft, and Apple remain steadfast in their refusal to ever allow anyone to share wide-gamut or HDR images.
Every year, I go on a rant about how my camera can take HDR images natively, but the only way to share these with a wider audience is to convert them to a slideshow and make a Rec.2020 HDR movie that I upload to YouTube.
It's absolutely bonkers to me that we've all collectively figured out how to stream a Hollywood movie to a pocket device over radio with a quality exceeding that of a typical cinema theatre, but these multi-trillion market cap corporations have all utterly failed to allow users to reliably send a still image with the same quality to each other!
Any year now, maybe in 2030s, someone will get around to a ticket that is currently at position 11,372 down the list below thousands of internal bullshit that nobody needed done, rearranging a dashboard nobody has ever opened, or whatever, and get around to letting computers be used for images. You know, utilising the screen, the only part billions of users ever look at, with their human eyes.
I can't politely express my disgust at the ineptitude, the sloth, the foot-dragging, the uncaring unprofessionalism of people that get paid more annually than I get in a decade, who are all too distracted making Clippy 2.0 instead of getting right the most utterly fundamental aspect of consumer computing.
If I could wave a magic wand, I would force a dev team from each of these companies to remain locked in a room until this was sorted out.
I’m wondering if HDR means something different to me, because I see HDR images all the time. I can share HDR images via phones (this seems to be the default behavior on iPhone/Mac messages), I can see HDR PNG stills on the web (https://github.com/swankjesse/hdr-emojis), I can see wide gamut P3 images on the web as well (https://webkit.org/blog-files/color-gamut/).
What am I missing?
> I can share HDR images via phones
Sure, me too! I can take a HDR P3 gamut picture with my iPhone and share it with all my friends and relatives... that have iPhones.
What I cannot do is take a picture with a $4000 Nikon DSLR and share it in the same way... unless I also buy a Mac so I can encode it in the magic Apple-only format[1] that works... for Mac and iOS users. I have a Windows PC. Linux users are similarly out in the cold.
This situation is so incredibly bad that I can pop the SD card of my camera into a reader plugged into my iPhone, process the RAW image on the iPhone with the Lightroom iPhone app in full, glorious HDR... and then be unable to export the HDR image onto the same device for viewing because oh-my-fucking-god-why!?
[1] They claim it is a standards-compliant HEIF file. No, it isn't. That's a filthy lie. My camera produces a HDR HEIF file natively, in-body. Everything opens it just fine, except all Apple ecosystem devices. I suspect the only way to get Apple to budge is to sue them for false advertising. But... sigh... they'll just change their marketing to remove "HEIF" and move on.
Not that I disagree, but HEIF is a container format. What is inside that container is essential. HEIC in HEIF, AVIF in HEIF, etc.
Sure, but Apple doesn't fully support HEIC either.
They support only a very specific subset of it, in a particular combination.
Some Apple apps can open third-party HEIC-in-HEIF files, and even display the image correctly, but if you try anything more "complex", it'll start failing. Simply forwarding the image to someone else will result in thumbnails looking weirdly corrupted, brightness shifting, etc...
I've even seen outright crashes, hangs, visible memory corruption, etc...
I bet there's at least one exploitable security vulnerability in this code!
The web has supported 16-bit PNGs for decades. This is enough bits for more dynamic range than a human eye with a fixed pupil size.
It is incredibly annoying that instead of adopting JpegXL they decided to use UltraHDR. A giant hack which works very poorly.
That's backwards compatibility for you.
I think Ultra HDR (and Apple's take on it, ISO 21496-1) make a lot of sense in a scenario where shipping alternate formats/codecs is not viable because renderer capabilities are not known or vary, similarly to how HDR was implemented on Blu-Ray 4K discs with the backwards-compatible Dolby Vision profiles.
It's also possible to do what Apple has done for HEIC on iOS: Store the modern format, convert to the best-known supported format at export/sharing time.
> A giant hack which works very poorly.
Indeed. I tried every possible export format from Adobe Lightroom including JPG + HDR gainmaps, and it looks... potato.
With a narrow gamut like sRGB it looks only slightly better than JPG, but with a wider gamut you get terrible posterization. People's faces turn grey and green and blue skies get bands across them.
Meanwhile my iPhone creates flawless 10-bit Dolby Vision video with the press of a button that I can share with anyone without it turning into a garbled mess.
Just last week I checked up on the "state of the art" for HDR still image sharing with Gemini Deep Research and after ten minutes of trawling through obscure forum posts it came back with a blunt "No".
We've figured out how to make machines think, but not how to exchange pictures in the quality that my 12-year-old DSLR is capable of capturing!
... unless I make a YouTube video with the images. That -- and only that -- works!
Just use PNG: https://www.w3.org/TR/png-3/ (for HDR content, see the cICP, mDCV and cLLI chunks; also note that PNG supports up to 16-bit channel depth out of the box).
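For what it's worth, that HDR signalling lives in ordinary PNG chunks, so you can check whether a given file carries it with a few lines of stdlib Python; a rough sketch (chunk names per the PNG spec linked above, nothing else assumed):

```python
# Rough sketch: walk a PNG's chunk list and flag the HDR-related chunks
# (cICP, mDCV, cLLI) from the PNG third-edition spec. Stdlib only.
import struct
import sys

HDR_CHUNKS = {b"cICP", b"mDCV", b"cLLI"}
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_chunk_types(path):
    with open(path, "rb") as f:
        if f.read(8) != PNG_SIGNATURE:
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, chunk_type = struct.unpack(">I4s", header)
            f.seek(length + 4, 1)  # skip chunk data and CRC
            yield chunk_type
            if chunk_type == b"IEND":
                break

if __name__ == "__main__":
    types = list(png_chunk_types(sys.argv[1]))
    present = [t.decode() for t in types if t in HDR_CHUNKS]
    print("HDR signalling chunks present:", present or "none")
```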
> 2026 is nearly upon us, and Google, Microsoft, and Apple remain steadfast in the refusal to ever allow anyone to share wide-gamut or HDR images.
Huh? Safari seems to render HDR JPEG XLs without any issues these days (e.g. [1]), and supports wide gamut in even more formats as far a I remember.
[1] https://jpegxl.info/resources/hdr-test-page.html
> the only way to share these with a wider audience is to convert them to a slideshow and make a Rec.2020 HDR movie that I upload to YouTube
i understand some of this frustration, but really you just have to use ffmpeg to convert it to a web format (which can be done by ffmpeg.js running in a service worker if your cpu is expensive) and spell <img as <video muted autoplay playsinline which is only a little annoying
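if you want that conversion step in script form, here's a rough sketch (ffmpeg must be on PATH; the codec and colour flags are one plausible choice for a PQ/Rec.2020 source, not the only one, and the file names are just examples):

```python
# rough sketch: wrap one still frame into a short muted HEVC clip with ffmpeg,
# tagging it as 10-bit Rec.2020 / PQ. assumes the source pixels already use
# that transfer; the 2-second duration is arbitrary.
import subprocess

def still_to_hdr_clip(src: str, dst: str, seconds: int = 2) -> None:
    subprocess.run([
        "ffmpeg", "-y",
        "-loop", "1", "-i", src,        # repeat the single input frame
        "-t", str(seconds),             # clip length
        "-c:v", "libx265",
        "-pix_fmt", "yuv420p10le",      # 10 bits per channel
        "-color_primaries", "bt2020",
        "-color_trc", "smpte2084",      # PQ transfer
        "-colorspace", "bt2020nc",
        "-tag:v", "hvc1",               # helps Apple players pick up the HEVC track
        dst,
    ], check=True)

still_to_hdr_clip("photo.png", "photo.mp4")
```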
> I can't politely express my disgust at the ineptitude, the sloth, the foot-dragging, the uncaring unprofessionalism of people that get paid more annually than I get in a decade, who are all too distracted making Clippy 2.0 instead of getting right the most utterly fundamental aspect of consumer computing.
hear hear
> If I could wave a magic wand, I would force a dev team from each of these companies to remain locked in a room until this was sorted out.
i can think of a few better uses for such a wand...
> <img as <video muted autoplay playsinline which is only a little annoying
Doesn't work for sharing images in text messages, social media posts, email, Teams, Wikipedia, etc...
> i can think of a few better uses for such a wand...
We all have our priorities.
I wish I could upvote this multiple times. Spot on, the situation is completely batshit bonkers insane.
> It's absolutely bonkers to me that we've all collectively figured out how to stream a Hollywood movie to a pocket device over radio with a quality exceeding that of a typical cinema theatre, but these multi-trillion market cap corporations have all utterly failed to allow users to reliably send a still image with the same quality to each other!
You act like this is some kind of mistake or limit of technology, but really it's an obvious intentional business decision.
Under late stage capitalism, it'd be weird if this wasn't the case in 2026.
Address the underlying issue, or don't be surprised by the race to the bottom.
This theory utterly fails Hanlon's razor (or whatever the organizational/societal equivalent is).
On one hand, there have been (and still are!) several competing HDR formats for videos (HDR10+, Dolby Vision, "plain" HLG, Dolby Vision in HLG, etc.), and it took years for a winner to pull ahead – that race just started earlier, and the set of stakeholders is different (and arguably a bit smaller) than that for still images.
On the other hand, there are also several still image HDR formats competing with each other right now (JPEG with gain map metadata, i.e. Ultra HDR and ISO 21496-1, Apple's older custom metadata, HEIF, AVIF, JPEG XL...), and JPEG XL isn't the clear winner yet.
Format wars are messy, and always have been. Yes, to some extent they are downstream of the lack of a central standardization body, but there's no anti-HDR cabal anywhere. If anything, it's the opposite – new AV formats requiring new hardware is just about the best thing that can happen to device manufacturers.
What are you talking about? You extract 3 exposure values from the raw camera buffer and merge and tone map them manually into a single HDR image. The final exported image format may not have the full supported color space, but that’s on you. Apple uses the P3 space by default.
This has been supported by both Apple and third party apps for over a decade. I’ve implemented it myself.
That's not HDR. That's pretend HDR in an SDR file, an artistic effect, nothing more.
Actual HDR needs at least 10 bits per channel and a modern display with peak brightness far in excess of traditional monitors. Ideally over 1,000 nits compared to typical LCD brightness of about 200.
You also don't need "three pictures". That was a hack used for the oldest digital cameras that had about 8 bits of precision in their analog to digital converters (ADC). Even my previous camera had a 14-bit ADC and in practice could capture about 12.5 bits of dynamic range, which is plenty for HDR imaging.
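To put rough numbers on that, a back-of-envelope sketch; the read-noise figure is purely illustrative, picked to land near the ~12.5 stops quoted above:

```python
# Back-of-envelope: bit depth caps theoretical dynamic range at `bits` stops,
# and sensor read noise eats into that. The noise value below is illustrative.
import math

adc_bits = 14
full_scale = 2 ** adc_bits                 # 16384 counts
read_noise_counts = 2.8                    # illustrative assumption

theoretical_stops = math.log2(full_scale)  # 14.0 stops
usable_stops = math.log2(full_scale / read_noise_counts)
print(f"theoretical: {theoretical_stops:.1f} stops, usable: ~{usable_stops:.1f} stops")
```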
Lightroom can now edit and export images in "true" HDR, basically the same as a modern HDR10 or Dolby Vision movie.
The problem is that the only way to share the exported HDR images is to convert them to a movie file format, and share them as a slide show.
There is no widely compatible still image format that can preserve 10-bit-per-channel colours, wide-gamut, and HDR metadata.
> Actual HDR needs at least 10 bits per channel and a modern display with peak brightness far in excess of traditional monitors. Ideally over 1,000 nits compared to typical LCD brightness of about 200.
In the Apple Silicon era, the MacBook Pro has a 1,000 nit display, with peak brightness at 1,600 nits when displaying HDR content.
Affinity Studio [1] also supports editing and exporting "true" HDR images.
[1]: https://www.affinity.studio
I have a 4K HDR OLED plugged into my Windows PC that works just fine for editing and viewing my photos.
I have no way, in general, to share those photos with you, not without knowing ahead of time what software you’re using. I’ll also have to whip up a web server with custom HTML and a bunch of hacks to encode my images that will work for you but not my friends with Android phones or Linux PCs.
I never mentioned a file format. These operations are performed on the raw buffer; there is no hack. There is no minimum bit depth for HDR (except for maybe 2); that's just silly. High dynamic range images just remap the physical light waves to match human perception, but collecting those waves can be done at any resolution or bit depth.
I wrote camera firmware. I've implemented HDR at both the firmware level and, later, at the higher client level when devices became faster. You're either overloading terminology to the point where we are just talking past each other, or you're very confused.
What you are talking about is also called HDR, but has nothing to do with what the other person is talking about. The other person is talking about the still image equivalent of HDR video formats. When displayed on an HDR-capable monitor, it will map the brightest parts of the image to the extended headroom of the monitor instead of tone mapping it for display on a standard SDR monitor. So to be even more clear: it defines brightness levels beyond what is normally 100%.
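A quick way to see what "beyond 100%" means in numbers; a sketch assuming BT.2408's ~203 nit SDR reference white, with the panel peaks as arbitrary examples:

```python
# Back-of-envelope: HDR "headroom" is how far a display's peak brightness sits
# above SDR reference white, measured in stops. Reference white per ITU-R BT.2408.
import math

SDR_REFERENCE_WHITE_NITS = 203.0  # assumption, per BT.2408

for peak_nits in (400, 1000, 1600):
    headroom_stops = math.log2(peak_nits / SDR_REFERENCE_WHITE_NITS)
    print(f"{peak_nits:5d} nit peak -> ~{headroom_stops:.1f} stops above SDR white")
```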
It is absolutely insane that Google has not implemented this yet. They implement all sorts of unimportant stuff, but not the most critical image format of this decade. What a joke.
And the things they do implement, they kill 8 or so years later.
https://killedbygoogle.com/
If all goes well (which is anything but guaranteed), JPEG XL will take off sufficiently to make any future deprecation as unthinkable as e.g. deprecating GIF rendering support.