MessagePack: It's like JSON, but fast and small.

90 points by davikr 2 days ago

Comments


buserror

I played quite a bit with MessagePack, used it for various things, and I don't like it. My primary gripes are:

+ Objects and Arrays need to be entirely and deeply parsed. You cannot skip over them.

+ Objects and Arrays cannot be streamed when writing. They require a 'count' at the beginning, and since the 'count' can vary in size in bytes, you can't even "walk back" and update it. It would have been MUCH, MUCH better to have a "begin" and "end" tag -- err, pretty much like JSON has, really.

You can alleviate these problems by using extensions, storing a byte count to skip, etc., but really, if you have to start there, you might as well use another format altogether.
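To make the count-prefix gripe concrete, here is a minimal sketch (purely illustrative) using the msgpack-python package: the element count is baked into the container header itself, and the header even changes width as the count grows, which is why you cannot reserve space and patch the count in afterwards.

    import msgpack

    # A 3-element array: the count (3) lives in the header byte itself.
    print(msgpack.packb([1, 2, 3]).hex())            # '93010203' -> 0x93 = fixarray(3)

    # Past 15 elements the header widens to 0xdc plus a 16-bit count, so
    # the header's own size depends on the count you don't yet know.
    print(msgpack.packb(list(range(16)))[:3].hex())  # 'dc0010'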

Also, from my tests, it is not particularly more compact, unless again you spend some time adding a hash table for keys and embedding that -- but at the point where that becomes valuable, you might as well gzip the JSON!

So in the end, in my experience, it is a lot better to use some sort of 'extended' JSON format with the idiocies removed (the trailing-comma ban, the forced double quotes for keys, etc.).

a day ago

crabmusket

> trailing commas, forcing double-quote for keys etc

How do these things matter in any use case where a binary protocol might be a viable alternative? These specific issues are problems for human readability and writability, right? But if msgpack were a viable technology for a particular use case, those concerns must already be moot.

a day ago

masklinn

> Object and Array cannot be streamed when writing. They require a 'count' at the beginning

Most languages know exactly how many elements a collection has (to say nothing of the number of members in a struct).

a day ago

atombender

Not if you're streaming input data where you cannot know the size ahead of time, and you want to pipeline the processing so that output is written in lockstep with the input. It might not be the entire dataset that's streamed.

For example, consider serializing something like [fetch(url1), join(fetch(url2), fetch(url3))]. The outer count is knowable, but the inner isn't. Even if the size of fetch(url2) and fetch(url3) are known, evaluating a join function may produce an unknown number of matches in its (streaming) output.

JSON, Protobuf, etc. can be very efficiently streamed, but it sounds like MessagePack is not designed for this. So processing the above would require pre-rendering the data in memory and then serializing it, which may require too much memory.
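As a sketch of the pipelining meant here (in Python; fetch and join above are stand-ins for the streaming sources): JSON's begin/end delimiters let a writer emit elements as they arrive, with no count known up front.

    import io
    import json

    def stream_json_array(items, out):
        # '[' goes out immediately; elements follow as they arrive.
        # No element count is ever needed -- ']' simply closes the stream.
        out.write("[")
        for i, item in enumerate(items):
            if i:
                out.write(",")
            out.write(json.dumps(item))
        out.write("]")

    buf = io.StringIO()
    stream_json_array((n * n for n in range(5)), buf)  # generator of unknown length
    print(buf.getvalue())  # [0,1,4,9,16]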

a day ago

baobun

> JSON, Protobuf, etc. can be very efficiently streamed

Protobuf yes, JSON no: you can't properly deserialize a JSON collection until it has been fully consumed. The same issue you're highlighting for serializing MessagePack occurs when deserializing JSON. I think MessagePack is very much written with streaming in mind; it makes sense to trade write efficiency for read efficiency, especially as, in msgpack's case, the party primarily affected by the tradeoff is the one making the cut. It all depends on your workloads, but I've done benchmarks for past work where msgpack came out on top. It can often be a good fit when you need to do stuff in Redis.

(If anyone thinks to counter with JSONL: well, there's no reason you can't do the same with msgpack.)
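For instance, a sketch of that JSONL-style framing with the msgpack-python package: independent messages are simply concatenated on the wire, and the streaming Unpacker yields each one as soon as its bytes are complete.

    import msgpack

    # Writer: concatenate standalone messages; no outer container, no count.
    stream = b"".join(msgpack.packb(m) for m in ({"id": 1}, {"id": 2}, {"id": 3}))

    # Reader: feed bytes as they arrive (e.g. from a socket) and pull
    # out complete messages incrementally.
    unpacker = msgpack.Unpacker()
    unpacker.feed(stream)
    for msg in unpacker:
        print(msg)  # {'id': 1}, then {'id': 2}, then {'id': 3}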

a day ago

laurencerowe

The advantage of JSON for streaming is on serialization. A server can begin streaming the response to the client before the length of the data is known.

JSON Lines is particularly helpful for JavaScript clients where streaming JSON parsers tend to be much slower than JSON.parse.

6 hours ago

atombender

Sorry, I was mostly thinking of writing. With JSON the main problem is, as you say, read efficiency.

a day ago

touisteur

I think the pattern in question might be (for example) the way some people (like me) sometimes write JSON as a trace of execution, sometimes directly to stdout (so, no going back in the stream). You're not serializing a structure but directly writing it as you go. So you don't know in advance how many objects you'll have in an array.

a day ago

buserror

If your code is compartmentalized properly, a lower layer (sub-objects) doesn't have to do all kinds of preparation just because a higher layer has "special needs".

For example, pseudocode in a sub-function:

    if (that)  write_field("that");
    if (these) write_field("these");

With MessagePack you have to apply the logic once to count, then again to write, and keep state for each level, etc.
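A minimal sketch of that two-pass dance with the msgpack-python package (the field names and their optionality are hypothetical): the map header needs the count before any field can be written.

    import msgpack

    def write_obj(packer, that=None, these=None):
        # Pass 1: count the fields that will actually be present,
        # because the map header must carry the count up front.
        count = sum(v is not None for v in (that, these))
        out = packer.pack_map_header(count)
        # Pass 2: apply the same logic again, this time writing.
        if that is not None:
            out += packer.pack("that") + packer.pack(that)
        if these is not None:
            out += packer.pack("these") + packer.pack(these)
        return out

    buf = write_obj(msgpack.Packer(), that=1)
    print(msgpack.unpackb(buf))  # {'that': 1}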

a day ago

masklinn

That sounds less like compartmentalisation and more like you having special needs and being unhappy they are not catered to.

a day ago

baobun

The topic is data serialisation formats, not programming languages.

a day ago

ubutler

Although MessagePack is definitely not a drop-in replacement for JSON, it is certainly extremely useful.

Unlike JSON, you can’t just open a MessagePack file in Notepad or vim and have it make sense. It’s often not human readable. So using MessagePack to store config files probably isn’t a good idea if you or your users will ever need to read them for debugging purposes.

But as a format for something like IPC or high-performance, low-latency communication in general, MessagePack brings serious improvements over JSON.

I recently had to build an inference server that needed to be able to communicate with an API server with minimal latency.

I started with gRPC and Protobuf since it's what everyone recommends, yet after a lot of benchmarking I found a much faster method: serving MessagePack over HTTP with a Litestar Python server (it's much faster than FastAPI), using msgspec for super-fast MessagePack encoding and ormsgpack for super-fast decoding.

Not sure how this beat Protobuf and gRPC, but it did. Perhaps the Python implementation is just slow. It was still faster than JSON over HTTP, however.
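Roughly what that split looks like as a sketch (the Payload type here is made up, but msgspec.msgpack.encode and ormsgpack.unpackb are the actual entry points):

    import msgspec
    import ormsgpack

    class Payload(msgspec.Struct):
        model: str
        tokens: list[int]

    # msgspec on the sending side...
    body = msgspec.msgpack.encode(Payload(model="demo", tokens=[1, 2, 3]))

    # ...ormsgpack on the receiving side.
    print(ormsgpack.unpackb(body))  # {'model': 'demo', 'tokens': [1, 2, 3]}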

a day ago

junon

Makes me wish Cap'n Proto was more prevalent and developer friendly. It ticks quite a few of my boxes.

a day ago


koito17

An approximate 20% reduction in bandwidth looks significant to me. I think the problem here is that the chart uses a linear scale instead of a logarithmic scale.

Looking at the data, I'm inclined to agree that not much CPU is saved, but the point of MessagePack is to save bandwidth, and it seems to be doing a good job at that.

a day ago

motorest

> An approximate 20% reduction in bandwidth looks significant to me.

Significant with regard to what? Not doing anything? Flipping the toggle to compress the response?

a day ago

tasuki

> An approximate 20% reduction in bandwidth looks significant to me.

To me it doesn't. There's compression for much bigger gains. Or, you know, just send less data?

I've worked at a place where our backend regularly sent humongous JSONs to all the connected clients. We were all pretty sure this could be reduced by 95%. But who would try to do that? There wasn't a business case. If someone tried and succeeded, no one would notice. If someone tried and broke something, it'd look bad. So, status quo...

a day ago

junon

Compression is a performance killer for intra-DC communication. You typically avoid it at all costs when doing RPC within the same AZ/DC.

Thus, the only thing you can do after that to improve performance is to reduce bytes on the wire.

a day ago

imtringued

Okay, so you just use plain JSON again...

In a discussion about using MessagePack, that doesn't really sound like MessagePack is winning.

12 hours ago

junon

A 20% reduction of bytes on the wire is not losing.

99.9999999% of the data sent across the network in that datacenter will never be read by humans directly. Why put it into a textual form?

10 hours ago

bythreads

In a system that requires the absolute speediest throughput, compression is actually usually the worst thing in a parse chain -- so parsing without having to decompress first is valuable.

I've tried MessagePack a few times, but to be honest the debugging hassle was never really worth it.

a day ago

dns_snek

That's a bad benchmark: it doesn't show what type of data is being encoded or in what programming language.

Encoding/decoding an array of strings in JavaScript is going to have a completely different performance profile than encoding/decoding an array of floats in a lower-level language like C.

a day ago

crabmusket

I came here to say this. I did my own benchmarks on JSON versus msgpack. I shall have to reproduce and publish them one day.

If you have a lot of strings, or lots of objects where the total data in keys is similar to the total data in values, then msgpack doesn't help much.

But when you have arrays of floats (which some systems at my work have a lot of), and you add a simple extension to make msgpack understand e.g. JavaScript's TypedArray family, you can get some very large speedups without much work.
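A sketch of such an extension in Python with msgpack-python (the type code 42 and the FloatArray wrapper are made up for illustration): the whole array travels as one contiguous blob instead of element by element.

    import struct
    import msgpack

    FLOAT32_EXT = 42  # arbitrary application-chosen extension type code

    class FloatArray:
        """Marker wrapper: serialize as one packed float32 blob."""
        def __init__(self, values):
            self.values = values

    def default(obj):
        # Called by msgpack for types it can't serialize natively.
        if isinstance(obj, FloatArray):
            blob = struct.pack(f"<{len(obj.values)}f", *obj.values)
            return msgpack.ExtType(FLOAT32_EXT, blob)
        raise TypeError(f"cannot serialize {obj!r}")

    def ext_hook(code, data):
        if code == FLOAT32_EXT:
            return list(struct.unpack(f"<{len(data) // 4}f", data))
        return msgpack.ExtType(code, data)

    packed = msgpack.packb(FloatArray([1.0, 2.5, 3.25]), default=default)
    print(msgpack.unpackb(packed, ext_hook=ext_hook))  # [1.0, 2.5, 3.25]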

a day ago

jauntywundrkind

CBOR is derived from MessagePack, and is used throughout BlueSky/AtProto.

Reports from there are that JSON is still the speed champ, by a healthy margin. I am among the many who chalk this up to JSON encoding/decoding being fantastically well optimized, with many heavily-invested, very well-tuned libraries available.

(Edit: well, https://github.com/djkoloski/rust_serialization_benchmark only shows Rust options, but it somewhat contradicts this: JSON encoding appears ~3x slower when comparing the best CBOR and JSON performances.)

This article feels quite broad and high-level, like a general introduction, with a couple of graphs thrown in at the end to try to close the deal. The space savings, I tend to think, are probably reasonably representative regardless of the library being tested. But the speed is going to vary greatly, and this is one of very few examples I've seen where CBOR comes out faster. The article provides no information that I can see about what the test is, or what data or libraries are being tested.

It is worth noting that interest in CBOR has heightened very recently, as Kubernetes 1.32 shipped an alpha feature to talk CBOR. The very good library below has gotten some good attention. https://github.com/fxamacker/cbor

18 hours ago

janalsncm

In my experience protobuf was smaller than MessagePack. I even tried compressing both with zstd and protobuf was still smaller. On the other hand protobuf is a lot less flexible.

2 days ago

comonoid

MessagePack is self-describing (it contains tags like "the next bytes are an integer"), but Protobuf relies on an external schema.

a day ago

jabwd

You can decode Protobuf in the same way; I've written several decoders in the past that don't rely on an external schema. There are some types you can't always decode with 100% confidence, but then again, JSON and the like aren't strongly typed either.

a day ago

SigmundA

Protobuf doesn't have key names without the schema, right?

Wouldn't it just decode to 1,type,value 2,type,value -- no names without the schema?

Human-readable key names are a big part of what makes a self-describing format useful, but they also contribute to bloat; a format with an embedded schema in the header would help.

a day ago

comonoid

Indeed.

a day ago

fshafique

Does it solve the problem of a repeating set of keys in an object array, e.g. when representing a table?

I don't think using a dictionary of key-values is the way to go here. I think there should be a dedicated "table" type, where the column keys are defined only once, not repeated for every single row.

a day ago

ubutler

MessagePack can encode rows as well; you just need to manage linking the keys back up during deserialization. In fact, it can encode arbitrary binary without needing base64 like JSON does.

a day ago

feverzsj

You can just use an array of arrays, like most scientific applications do.
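Something like this hypothetical layout -- column names once, rows as plain arrays (and, as noted in the reply below, the same layout works in JSON too):

    import msgpack

    table = {
        "columns": ["id", "name", "score"],
        "rows": [
            [1, "ada", 9.5],
            [2, "bob", 7.25],
        ],
    }
    packed = msgpack.packb(table)

    # Rebuild per-row dicts on the consuming side if needed.
    decoded = msgpack.unpackb(packed)
    records = [dict(zip(decoded["columns"], row)) for row in decoded["rows"]]
    print(records)  # [{'id': 1, 'name': 'ada', 'score': 9.5}, {'id': 2, ...}]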

a day ago

fshafique

I can do that with JSON too. I was hoping MessagePack would have built-in functionality to do it.

19 hours ago

karteum

I discovered JSON BinPack recently, which works either schemaless (like msgpack) or -- supposedly more efficiently -- with a schema. I haven't tried the codebase yet, but it looks interesting.

https://jsonbinpack.sourcemeta.com/

2 days ago

sd9

It drops the most useful aspect of JSON, which is that you can open it in a text editor.

It's like JSON in that it's a serialisation format.

a day ago

PittleyDunkin

And here I was thinking serialization was the most useful aspect of JSON (and it's not even great at that).

a day ago

majewsky

Serialization is what it is, not why it is useful. The most useful aspect of JSON is undoubtedly wide support (historically, that every web browser carried a parser implementation from day 1; contemporarily, the availability as a library in nearly every programming language, often in std, plus shell accessibility through jq).

a day ago

PittleyDunkin

> Serialization is what it is, not why it is useful.

Is there a non-teleological manner in which to evaluate standards?

> The most useful aspect of JSON is undoubtedly wide support

This is a fantastic example of how widespread technology doesn't imply quality.

Don't get me wrong, I love JSON. It's a useful format with many implementations of varying quality. But it's also a major pain in the ass to deal with: encoding errors, syntax errors, no byte syntax, schemas are horribly implemented. It's used because it's popular, not because it has some particular benefit.

In fact, I'd argue JSON's largest benefit as opposed to competitive serializers has been to not give a fuck about the quality of (de)serialization. Who gives a fuck about the semantics of parsing a number when that's your problem?!?

a day ago

xcircle

In our C++ project, we use the nlohmann library for handling JSON data. When sending JSON over MQTT, we leverage a built-in function of nlohmann to serialize our JSON objects into the MessagePack format: you can simply call nlohmann::json::to_msgpack(jsonObj). Similarly, the data can be decoded back easily with json::from_msgpack().

10 hours ago

dallbee

A trick I often use to get the most out of MessagePack is array encoding of structs. Most msgpack libraries have support for this, as in the sketch below.

This gives you a key benefit of Protobuf without needing external schema files: you don't have to pay for the space of your data's keys.

This is simply not something you can do with JSON, and depending on your data it can yield substantial space savings.
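For example, in Python, msgspec exposes this as array_like=True on a Struct (a minimal sketch):

    import msgspec

    # array_like=True drops field names from the wire: the struct is
    # encoded as a positional array, and names come back from the type.
    class Point(msgspec.Struct, array_like=True):
        x: float
        y: float

    buf = msgspec.msgpack.encode(Point(1.0, 2.0))   # packs as [1.0, 2.0]
    print(msgspec.msgpack.decode(buf, type=Point))  # Point(x=1.0, y=2.0)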

20 hours ago

slurpyb

Shout-out to msgspec -- I haven't had a project without it in a while.

2 days ago

ubutler

+1. It's almost as indispensable as tqdm, for a data scientist at least.

a day ago

mdhb

CBOR: It’s like JSON but fast and small but also an official IETF standard.

https://cbor.io/

2 days ago

camgunz

Disclaimer: I wrote and maintain a MessagePack implementation.

CBOR is MessagePack. The story is that Carsten Bormann wanted to create an IETF standardized MP version, the creators asked him not to (after he acted in pretty bad faith), he forked off a version, added some very ill-advised tweaks, named it after himself, and submitted it anyway.

I wrote this up years ago (https://news.ycombinator.com/item?id=14072598), and since then the only thing they've addressed is undefined behavior when a decoder encounters an unknown simple value.

a day ago

mdhb

Interesting bit of context I wasn’t aware of. Thanks for that.

8 hours ago

nmadden

CBOR is basically a fork of MsgPack. I prefer the original - it’s simpler and there are more high-quality implementations available.

a day ago

jpc0

CBOR is actively used in the WebAuthn spec (passkeys), so browsers ship with an implementation... And if you intend to support it, even via a library, you will be shipping an implementation as well.

https://www.w3.org/TR/webauthn-2/#sctn-conforming-all-classe...

a day ago

camgunz

Disclaimer: I wrote and maintain a MessagePack implementation.

Reading through this, it looks like they toss out indefinite length values, "canonicalization", and tags, making it essentially MP (MP does have extension types, I should say).

https://fidoalliance.org/specs/fido-v2.0-ps-20190130/fido-cl...

a day ago

throw5959

Which Web API can encode and decode CBOR? I'm not aware of any, and unless I'm mistaken you will need to ship your own implementation in any case.

a day ago

feverzsj

It's too complex, and the implementation is poor[0].

[0]: https://github.com/getml/reflect-cpp/tree/main/benchmarks

a day ago

otabdeveloper4

CBOR is a standard, not an implementation.

As a standard it's almost exactly the same as MsgPack; the difference is mostly that CBOR filled out underspecified parts of MsgPack (things like how extensions for custom types work, etc.).

a day ago

tjoff

Implementation is poor because of performance?

Performance is just one aspect, and using "poor" to describe it is very misleading. Say "not performant" if that is what you meant.

a day ago

mattbillenstein

I've built a few systems using msgpack-rpc - serves really well as a transport format in my experience!

2 days ago

eeasss

Serialization vulnerabilities, anyone?

a day ago

h_tbob

I like how it makes binary easy. I wish there were a simple way to do binary in JSON.

a day ago

agieocean

Made an open image format with this for constrained networks and it works great

a day ago

vdqtp3

Ignorant question: is the relatively small size benefit worth another standard that's fairly opaque to troubleshooting and loses readability?

Is there a direct comparison showing why someone should choose this over the alternatives? 27 bytes down to 18 (for their example) just doesn't seem like enough of a benefit. This clearly isn't targeted at me in either case, but to someone without much knowledge of the space, it seems like a solution in search of a problem.

2 days ago

willhbr

If you need a format that can transport byte arrays unmodified (image data, etc.), msgpack (or protos, or whatever) is much better than JSON, since you don't have to base64-encode or escape the data. It also supports non-string keys, which can be convenient.
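A quick illustration with the msgpack-python package (the payload is made up): raw bytes and integer keys both round-trip directly, where JSON would need base64 and string keys.

    import msgpack

    msg = {0: b"\x89PNG\r\n", 1: "thumbnail"}  # bytes value, integer keys
    packed = msgpack.packb(msg)

    # strict_map_key=False permits non-string keys when unpacking.
    print(msgpack.unpackb(packed, strict_map_key=False))
    # {0: b'\x89PNG\r\n', 1: 'thumbnail'}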

2 days ago

diath

It's useful when dealing with high-traffic networked services; the little savings here and there have compounding effects over time and save you a lot of bandwidth.

a day ago

div3rs3

I'd argue the value goes up with larger payloads. The tradeoff is ease of use vs efficiency.

2 days ago

2muchcoffeeman

I’m not sure why you wouldn’t just develop in json, then flick a switch to use binary.

2 days ago

palmfacehn

It starts to make sense if you are returning a large array of objects and each object contains several long values. Unfortunately, it looks like msgpack doesn't support u128 or arbitrary-precision big integers. I suppose you can always cast to byte[].
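A sketch of that byte[] workaround in Python with msgpack-python (the 128-bit width is illustrative):

    import msgpack

    big = 1 << 100                  # too large for msgpack's 64-bit integers
    raw = big.to_bytes(16, "big")   # cast to bytes, as suggested above
    packed = msgpack.packb(raw)     # travels as a msgpack bin value

    assert int.from_bytes(msgpack.unpackb(packed), "big") == big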

a day ago

goodpoint

MessagePack and CBOR allow zero-copy parsing.

a day ago

cryptonector

We already have CBOR and other binary JSONs.

2 days ago

jakewins

CBOR is a clone of msgpack: https://en.m.wikipedia.org/wiki/CBOR

a day ago

crabbone

From a cursory reading of the specification, the format doesn't seem to offer anything groundbreaking -- no particular benefits compared to other similar formats.

Whatever your messaging format is going to be, the performance will mostly depend on the application developer and their understanding of the specifics of the format. So the 20% figure seems arbitrary.

In practical terms, I'd say: if you feel confident about dealing with binary formats and like fiddling with this side of your application, making your own is probably the best way to go. If you don't like doing that or don't know how, then choosing the format with the most mature and robust tools around it is probably the best option.

----

NB: It's also useful to remember that data transfer over the network is discrete, with the minimum chunk of information being one MTU. So, for example, if most of the messages exchanged by the application were already smaller than one MTU before you started optimizing for size, then making those messages shorter will yield no tangible benefit. It's really only worth starting to think about such optimizations when a significant portion of the messages measure at least in the low double digits of MTUs, if we believe the 20% figure.

It's a similar situation with storage, which is also discrete, with the minimum chunk being one block. Similar reasoning applies there as well.

a day ago