%global _empty_manifest_terminate_build 0
Name: python-compresso
Version: 3.2.0
Release: 1
Summary: compresso algorithm variant based on work by Matejek et al.
License: MIT
URL: https://github.com/seung-lab/compresso
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/e3/b2/2206215bfc7a14b025198983ca6e7c9ee45762a33cd93e35a1575b629796/compresso-3.2.0.tar.gz
Requires: python3-click
Requires: python3-numpy
%description
# Compresso: Efficient Compression of Segmentation Data For Connectomics (PyPI edition)
[](https://badge.fury.io/py/compresso)
[](https://vcg.seas.harvard.edu/publications/compresso-efficient-compression-of-segmentation-data-for-connectomics)
[](http://www.miccai2017.org/schedule)

```python
import compresso
import numpy as np
labels = np.array(...)
compressed_labels = compresso.compress(labels) # 3d numpy array -> compressed bytes
reconstituted_labels = compresso.decompress(compressed_labels) # compressed bytes -> 3d numpy array
# Adds an index and modifies the stream to enable
# Random access to Z slices. Format Version 1.
compressed_labels = compresso.compress(labels, random_access_z_index=True)
reconstituted_labels = compresso.decompress(compressed_labels, z=3) # one z slice
reconstituted_labels = compresso.decompress(compressed_labels, z=(1,5)) # four slices
# A convenience object that simulates an array
# to efficiently extract image data
arr = compresso.CompressoArray(compressed_labels)
img = arr[:,:,1:5] # same four slices as above
# Returns header info as dict
# Has array dimensions and data width information.
header = compresso.header(compressed_labels)
# Extract the unique labels from a stream without
# decompressing to a full 3D array. Fast and low memory.
uniq_labels = compresso.labels(compressed_labels)
# Remap labels without decompressing. Could
# be useful for e.g. proofreading.
compressed_remapped = compresso.remap(
compressed_labels, { 1: 2, 2: 3, ... },
preserve_missing_labels=True
)
# Checks if the stream appears to be valid.
# This is a superficial check of headers.
is_valid = compresso.valid(stream)
```
```bash
# CLI compression of numpy data
# Compresso is designed to use a second stage compressor
# so use gzip, lzma, or others on the output file.
$ compresso data.npy # -> data.npy.cpso
$ compresso -d data.npy.cpso # -> data.npy
$ compresso --help
```
*NOTE: This is an extensive modification of the work by Matejek et al. which can be found here: https://github.com/VCG/compresso. It is not compatible with RhoANA streams.*
> Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression for efficient storage and transfer. General-purpose compression methods are less effective because the label data consists of large low-frequency regions with structured boundaries unlike natural image data. We present Compresso, a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D. We compare our method to existing compression schemes and provide a detailed evaluation on eleven biomedical and image segmentation datasets. Our method provides a factor of 600-2200x compression for label volumes, with running times suitable for practice.
**Paper**: Matejek _et al._, "Compresso: Efficient Compression of Segmentation Data For Connectomics", Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2017, 10-14. \[[CITE](https://scholar.google.com/scholar?q=Compresso%3A+Efficient+Compression+of+Segmentation+Data+For+Connectomics) | [PDF](https://vcg.seas.harvard.edu/publications/compresso-efficient-compression-of-segmentation-data-for-connectomics/paper)\]
In more concrete but simple terms, Compresso represents the boundary between segments as a bit-packed boolean field. Long runs of zeros are run-length encoded. The 4-connected components within that field are mapped to their corresponding labels. Boundary voxels are decoded with reference to their neighbors or, if that fails, by storing their label. A second stage of compression, such as gzip or lzma, is then applied. There are a few more details, but that's a reasonable overview.
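The boundary-field idea can be sketched in numpy. The exact boundary convention used here (which of two differing neighbors gets marked) is an assumption of this sketch, not the codec's precise definition, and the packing and run-length stages are omitted:

```python
import numpy as np

def boundary_field(labels):
    # Mark a voxel as a boundary when it differs from the next voxel
    # along x or y within the same z slice. The real codec then packs
    # this boolean field into fixed-size windows and run-length encodes
    # long runs of zeros; this sketch stops at the field itself.
    boundaries = np.zeros(labels.shape, dtype=bool)
    boundaries[:-1, :, :] |= labels[:-1, :, :] != labels[1:, :, :]
    boundaries[:, :-1, :] |= labels[:, :-1, :] != labels[:, 1:, :]
    return boundaries

# Two segments split along x in a single 4x4x1 slice.
labels = np.zeros((4, 4, 1), dtype=np.uint32)
labels[2:, :, :] = 7
bf = boundary_field(labels)  # only the row touching the split is marked
```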
## Setup
Requires Python 3.6+
```bash
pip install compresso
```
## Versions
| Major Version | Format Version | Description |
|---------------|----------------|----------------------------------------------------------------|
| 1 | - | Initial Release. Not usable due to bugs. No format versioning. |
| 2 | 0 | First major release. |
| 3 | 0,1 | Introduces random access to z slices in format version 1. |
## Compresso Stream Format
| Section | Bytes | Description |
|-----------|-------------------------------------------|-----------------------------------------------------------------------------------------------------------------|
| Header | 36 | Metadata incl. length of fields. |
| ids | header.data_width * header.id_size | Map of CCL regions to labels. |
| values | window_size * header.value_size | Values of renumbered windows. Bitfields describing boundaries. |
| locations | header.data_width * header.locations_size | Sequence of control codes (0-6) and labels offset by +7 that describe how to decode indeterminate locations in the boundary. |
| windows | The rest of the stream. | Sequence of numbers to be remapped from values. Describes the boundary structure of labels. |
| z_index | (optional tail) 2 * width * header.sz | Offsets into label values and locations to enable random access to slices. Format Version 1. |
`window_size` is the smallest data type that will contain `xstep * ystep * zstep`. For example, `steps=(4,4,1)` uses uint16 while `steps=(8,8,1)` uses uint64.
The byte width of the `z_index` is the smallest unsigned integer type that will contain `2 * sx * sy`.
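The dtype selection described above can be sketched as follows; `window_dtype` is a hypothetical helper for illustration, not part of the library's API:

```python
import numpy as np

def window_dtype(xstep, ystep, zstep):
    # Smallest unsigned integer type whose bit count can hold one
    # xstep * ystep * zstep boundary-window bitfield.
    bits = xstep * ystep * zstep
    for dt in (np.uint8, np.uint16, np.uint32, np.uint64):
        if bits <= np.dtype(dt).itemsize * 8:
            return np.dtype(dt)
    raise ValueError("window exceeds 64 bits")

window_dtype(4, 4, 1)  # uint16, as in the example above
window_dtype(8, 8, 1)  # uint64
```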
## Codec Changes
The codec has been updated and is no longer compatible with the original. Below are the important changes we made that differ from the code published alongside the paper.
Implementation-wise, we also fixed several bugs, added guards against data corruption, did some performance tuning, and made sure that the entire codec is implemented in C++ and called from Python. Thus, the codec is usable from both C++ and Python, as well as from any target, such as WebAssembly, that C++ can be compiled to.
Thank you to the original authors for publishing your code and algorithm from which this repo is derived.
### Updated Header
The previous header was 72 bytes. We updated the header to be only 36 bytes. It now includes the magic number `cpso`, a version number, and the data width of the labels.
This additional information makes detecting valid compresso streams easier, allows for updating the format in the future, and allows us to assume smaller byte widths than 64-bit.
| Attribute | Value | Type | Description |
|-------------------|-------------------|---------|-------------------------------------------------|
| magic | cpso | char[4] | File magic number. |
| format_version | 0 or 1 | u8 | Version of the compresso stream. |
| data_width | 1,2,4,or 8 | u8 | Size of the labels in bytes. |
| sx, sy, sz | >= 0 | u16 x 3 | Size of array dimensions. |
| xstep,ystep,zstep | 0 < product <= 64 | u8 x 3 | Size of structure grid. |
| id_size | >= 0 | u64 | Size of array mapping of CCL regions to labels. |
| value_size | >= 0 | u32 | Size of array mapping windows to renumbering. |
| location_size | >= 0 | u64 | Size of indeterminate locations array. |
| connectivity | 4 or 6 | u8 | Connectivity for connected components. |
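Assuming the fields are packed in table order with no padding (which matches the 36-byte total in the stream format table), the header could be parsed like this; the exact layout is an inference from the table, not taken from the reference implementation:

```python
import struct

# Little endian, packed in table order: magic, version, data_width,
# sx/sy/sz, xstep/ystep/zstep, id_size, value_size, location_size,
# connectivity. 4+1+1+2+2+2+1+1+1+8+4+8+1 = 36 bytes.
HEADER_FMT = "<4sBBHHHBBBQIQB"

def parse_header(buf):
    (magic, version, data_width, sx, sy, sz,
     xstep, ystep, zstep, id_size, value_size,
     location_size, connectivity) = struct.unpack(HEADER_FMT, buf[:36])
    if magic != b"cpso":
        raise ValueError("not a compresso stream")
    return {
        "format_version": version, "data_width": data_width,
        "sx": sx, "sy": sy, "sz": sz,
        "steps": (xstep, ystep, zstep),
        "id_size": id_size, "value_size": value_size,
        "location_size": location_size, "connectivity": connectivity,
    }

# Round trip a synthetic header.
raw = struct.pack(HEADER_FMT, b"cpso", 1, 4, 512, 512, 64,
                  4, 4, 1, 1000, 200, 50, 4)
hdr = parse_header(raw)
```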
### Char Byte Stream
The previous implementation treated the byte stream as uniform u64 little endian. We now emit the encoded stream as `unsigned char` and write each appropriate data type in little endian.
### Variable Data Widths
The labels may assume any unsigned integer data width, which reduces the size of the ids and locations stream when appropriate. The encoded boundaries are reduced to the smallest size that fits. A 4x4x1 window is represented with u16, an 8x8x1 with u64. Less commonly used, but a 4x4x2 would be represented with u32, and a 4x2x1 would get a u8.
*Note that at this time only 4x4x1 and 8x8x1 are supported in this implementation, though the format accommodates the others.*
### Supports Full Integer Range in Indeterminate Locations
The previous codec reserved 6 integers for instructions in the locations stream, which meant that six segmentation labels were not representable. We added a seventh reserved instruction indicating that the next value in the stream is the label, which lets us use the full range of the integer to represent that number.
This potentially expands the size of the compressed stream. However, we only use this instruction for non-representable numbers, so for most data it should cause zero or minimal increase so long as non-representable numbers in indeterminate locations are rare. The upside is that Compresso now handles all possible inputs.
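A toy sketch of the escape-code idea, working on integer values rather than the codec's actual byte layout; the code numbering and width handling here are assumptions:

```python
ESCAPE = 6  # assumed: the seventh reserved instruction

def encode_entry(label, max_value):
    # Stored labels are shifted past the reserved codes. A label that
    # would overflow the stream's integer width is emitted as the
    # escape code followed by the label at full range.
    if label + 7 <= max_value:
        return [label + 7]
    return [ESCAPE, label]

def decode_entry(stream, i):
    # Returns (label, next_index).
    if stream[i] == ESCAPE:
        return stream[i + 1], i + 2
    return stream[i] - 7, i + 1

max_u8 = 255
small = encode_entry(40, max_u8)   # fits after the +7 shift -> [47]
big = encode_entry(250, max_u8)    # 250 + 7 overflows -> [6, 250]
```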
### Supports 4 and 6 Connected Components
6-connected CCL seems like it would be a win because it would reduce the number of duplicated IDs that need to be stored. In an experiment, it did significantly decrease the number of IDs, but at the expense of adding many more boundary voxels (since the Z direction must now be considered) and far more indeterminate locations. It ended up slower and larger on some connectomics segmentations we experimented with.
However, we suspect that there are some images where 6-connectivity would do better. An obvious example is a solid-color image with no boundaries. The images where it shines will probably have sparser and straighter boundaries, so that fewer additional boundary voxels are introduced.
### Random Access to Z-Slices (Format Version 1)
We make two changes to the codec to allow random access to Z slices. First, we disable indeterminate location codes 4 and 5, which refer to other slices, making each slice independently decodable. Second, we add a tail of size `2 * index_width * sz` containing unsigned 8, 16, 32, or 64 bit offsets into the labels and locations streams for each slice, arranged as all the labels then all the locations (they are not interleaved). The offsets are difference coded to reduce the magnitude of the integers. The byte width is the smallest unsigned integer type that can represent `2 * sx * sy`, a coarse upper bound for the locations stream.
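The difference coding of the index offsets can be sketched with hypothetical helpers:

```python
import numpy as np

def diff_encode(offsets):
    # Store deltas between consecutive offsets instead of absolute
    # values, shrinking the magnitudes the second-stage compressor sees.
    return np.diff(offsets, prepend=0)

def diff_decode(deltas):
    # Cumulative sum restores the absolute offsets.
    return np.cumsum(deltas)

offsets = np.array([0, 128, 130, 512], dtype=np.uint64)
deltas = diff_encode(offsets)     # [0, 128, 2, 382]
restored = diff_decode(deltas)
```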
The overall impact of this change is a slight increase in the size of the compresso stream and a possible impact on the compressibility if the vertical references were heavily used, such as on a checkerboard type image.
This feature can be disabled by setting `compress(..., random_access_z_index=False)` which will emit a format version 0 stream. When this feature is enabled, it sets the format version to 1. This implementation can encode and decode both format versions.
This feature is not supported when `connectivity=6` due to the required interdependence of the slices.
### Results From the Paper
**Compression Performance**

Compression ratios of general-purpose compression methods combined with Compresso and Neuroglancer. Compresso paired with LZMA yields the best compression ratios for all connectomics datasets (left) and on average (four out of five) for the others (right).
%package -n python3-compresso
Summary: compresso algorithm variant based on work by Matejek et al.
Provides: python-compresso
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
BuildRequires: python3-cffi
BuildRequires: gcc
BuildRequires: gdb
%description -n python3-compresso
# Compresso: Efficient Compression of Segmentation Data For Connectomics (PyPI edition)
[](https://badge.fury.io/py/compresso)
[](https://vcg.seas.harvard.edu/publications/compresso-efficient-compression-of-segmentation-data-for-connectomics)
[](http://www.miccai2017.org/schedule)

```python
import compresso
import numpy as np
labels = np.array(...)
compressed_labels = compresso.compress(labels) # 3d numpy array -> compressed bytes
reconstituted_labels = compresso.decompress(compressed_labels) # compressed bytes -> 3d numpy array
# Adds an index and modifies the stream to enable
# Random access to Z slices. Format Version 1.
compressed_labels = compresso.compress(labels, random_access_z_index=True)
reconstituted_labels = compresso.decompress(compressed_labels, z=3) # one z slice
reconstituted_labels = compresso.decompress(compressed_labels, z=(1,5)) # four slices
# A convenience object that simulates an array
# to efficiently extract image data
arr = compresso.CompressoArray(compressed_labels)
img = arr[:,:,1:5] # same four slices as above
# Returns header info as dict
# Has array dimensions and data width information.
header = compresso.header(compressed_labels)
# Extract the unique labels from a stream without
# decompressing to a full 3D array. Fast and low memory.
uniq_labels = compresso.labels(compressed_labels)
# Remap labels without decompressing. Could
# be useful for e.g. proofreading.
compressed_remapped = compresso.remap(
compressed_labels, { 1: 2, 2: 3, ... },
preserve_missing_labels=True
)
# Checks if the stream appears to be valid.
# This is a superficial check of headers.
is_valid = compresso.valid(stream)
```
```bash
# CLI compression of numpy data
# Compresso is designed to use a second stage compressor
# so use gzip, lzma, or others on the output file.
$ compresso data.npy # -> data.npy.cpso
$ compresso -d data.npy.cpso # -> data.npy
$ compresso --help
```
*NOTE: This is an extensive modification of the work by Matejek et al. which can be found here: https://github.com/VCG/compresso. It is not compatible with RhoANA streams.*
> Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression for efficient storage and transfer. General-purpose compression methods are less effective because the label data consists of large low-frequency regions with structured boundaries unlike natural image data. We present Compresso, a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D. We compare our method to existing compression schemes and provide a detailed evaluation on eleven biomedical and image segmentation datasets. Our method provides a factor of 600-2200x compression for label volumes, with running times suitable for practice.
**Paper**: Matejek _et al._, "Compresso: Efficient Compression of Segmentation Data For Connectomics", Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2017, 10-14. \[[CITE](https://scholar.google.com/scholar?q=Compresso%3A+Efficient+Compression+of+Segmentation+Data+For+Connectomics) | [PDF](https://vcg.seas.harvard.edu/publications/compresso-efficient-compression-of-segmentation-data-for-connectomics/paper)\]
In more concrete but simple terms, Compresso represents the boundary between segments as a bit-packed boolean field. Long runs of zeros are run-length encoded. The 4-connected components within that field are mapped to their corresponding labels. Boundary voxels are decoded with reference to their neighbors or, if that fails, by storing their label. A second stage of compression, such as gzip or lzma, is then applied. There are a few more details, but that's a reasonable overview.
## Setup
Requires Python 3.6+
```bash
pip install compresso
```
## Versions
| Major Version | Format Version | Description |
|---------------|----------------|----------------------------------------------------------------|
| 1 | - | Initial Release. Not usable due to bugs. No format versioning. |
| 2 | 0 | First major release. |
| 3 | 0,1 | Introduces random access to z slices in format version 1. |
## Compresso Stream Format
| Section | Bytes | Description |
|-----------|-------------------------------------------|-----------------------------------------------------------------------------------------------------------------|
| Header | 36 | Metadata incl. length of fields. |
| ids | header.data_width * header.id_size | Map of CCL regions to labels. |
| values | window_size * header.value_size | Values of renumbered windows. Bitfields describing boundaries. |
| locations | header.data_width * header.locations_size | Sequence of control codes (0-6) and labels offset by +7 that describe how to decode indeterminate locations in the boundary. |
| windows | The rest of the stream. | Sequence of numbers to be remapped from values. Describes the boundary structure of labels. |
| z_index | (optional tail) 2 * width * header.sz | Offsets into label values and locations to enable random access to slices. Format Version 1. |
`window_size` is the smallest data type that will contain `xstep * ystep * zstep`. For example, `steps=(4,4,1)` uses uint16 while `steps=(8,8,1)` uses uint64.
The byte width of the `z_index` is the smallest unsigned integer type that will contain `2 * sx * sy`.
## Codec Changes
The codec has been updated and is no longer compatible with the original. Below are the important changes we made that differ from the code published alongside the paper.
Implementation-wise, we also fixed several bugs, added guards against data corruption, did some performance tuning, and made sure that the entire codec is implemented in C++ and called from Python. Thus, the codec is usable from both C++ and Python, as well as from any target, such as WebAssembly, that C++ can be compiled to.
Thank you to the original authors for publishing your code and algorithm from which this repo is derived.
### Updated Header
The previous header was 72 bytes. We updated the header to be only 36 bytes. It now includes the magic number `cpso`, a version number, and the data width of the labels.
This additional information makes detecting valid compresso streams easier, allows for updating the format in the future, and allows us to assume smaller byte widths than 64-bit.
| Attribute | Value | Type | Description |
|-------------------|-------------------|---------|-------------------------------------------------|
| magic | cpso | char[4] | File magic number. |
| format_version | 0 or 1 | u8 | Version of the compresso stream. |
| data_width | 1,2,4,or 8 | u8 | Size of the labels in bytes. |
| sx, sy, sz | >= 0 | u16 x 3 | Size of array dimensions. |
| xstep,ystep,zstep | 0 < product <= 64 | u8 x 3 | Size of structure grid. |
| id_size | >= 0 | u64 | Size of array mapping of CCL regions to labels. |
| value_size | >= 0 | u32 | Size of array mapping windows to renumbering. |
| location_size | >= 0 | u64 | Size of indeterminate locations array. |
| connectivity | 4 or 6 | u8 | Connectivity for connected components. |
### Char Byte Stream
The previous implementation treated the byte stream as uniform u64 little endian. We now emit the encoded stream as `unsigned char` and write each appropriate data type in little endian.
### Variable Data Widths
The labels may assume any unsigned integer data width, which reduces the size of the ids and locations stream when appropriate. The encoded boundaries are reduced to the smallest size that fits. A 4x4x1 window is represented with u16, an 8x8x1 with u64. Less commonly used, but a 4x4x2 would be represented with u32, and a 4x2x1 would get a u8.
*Note that at this time only 4x4x1 and 8x8x1 are supported in this implementation, though the format accommodates the others.*
### Supports Full Integer Range in Indeterminate Locations
The previous codec reserved 6 integers for instructions in the locations stream, which meant that six segmentation labels were not representable. We added a seventh reserved instruction indicating that the next value in the stream is the label, which lets us use the full range of the integer to represent that number.
This potentially expands the size of the compressed stream. However, we only use this instruction for non-representable numbers, so for most data it should cause zero or minimal increase so long as non-representable numbers in indeterminate locations are rare. The upside is that Compresso now handles all possible inputs.
### Supports 4 and 6 Connected Components
6-connected CCL seems like it would be a win because it would reduce the number of duplicated IDs that need to be stored. In an experiment, it did significantly decrease the number of IDs, but at the expense of adding many more boundary voxels (since the Z direction must now be considered) and far more indeterminate locations. It ended up slower and larger on some connectomics segmentations we experimented with.
However, we suspect that there are some images where 6-connectivity would do better. An obvious example is a solid-color image with no boundaries. The images where it shines will probably have sparser and straighter boundaries, so that fewer additional boundary voxels are introduced.
### Random Access to Z-Slices (Format Version 1)
We make two changes to the codec to allow random access to Z slices. First, we disable indeterminate location codes 4 and 5, which refer to other slices, making each slice independently decodable. Second, we add a tail of size `2 * index_width * sz` containing unsigned 8, 16, 32, or 64 bit offsets into the labels and locations streams for each slice, arranged as all the labels then all the locations (they are not interleaved). The offsets are difference coded to reduce the magnitude of the integers. The byte width is the smallest unsigned integer type that can represent `2 * sx * sy`, a coarse upper bound for the locations stream.
The overall impact of this change is a slight increase in the size of the compresso stream and a possible impact on the compressibility if the vertical references were heavily used, such as on a checkerboard type image.
This feature can be disabled by setting `compress(..., random_access_z_index=False)` which will emit a format version 0 stream. When this feature is enabled, it sets the format version to 1. This implementation can encode and decode both format versions.
This feature is not supported when `connectivity=6` due to the required interdependence of the slices.
### Results From the Paper
**Compression Performance**

Compression ratios of general-purpose compression methods combined with Compresso and Neuroglancer. Compresso paired with LZMA yields the best compression ratios for all connectomics datasets (left) and on average (four out of five) for the others (right).
%package help
Summary: Development documents and examples for compresso
Provides: python3-compresso-doc
%description help
# Compresso: Efficient Compression of Segmentation Data For Connectomics (PyPI edition)
[](https://badge.fury.io/py/compresso)
[](https://vcg.seas.harvard.edu/publications/compresso-efficient-compression-of-segmentation-data-for-connectomics)
[](http://www.miccai2017.org/schedule)

```python
import compresso
import numpy as np
labels = np.array(...)
compressed_labels = compresso.compress(labels) # 3d numpy array -> compressed bytes
reconstituted_labels = compresso.decompress(compressed_labels) # compressed bytes -> 3d numpy array
# Adds an index and modifies the stream to enable
# Random access to Z slices. Format Version 1.
compressed_labels = compresso.compress(labels, random_access_z_index=True)
reconstituted_labels = compresso.decompress(compressed_labels, z=3) # one z slice
reconstituted_labels = compresso.decompress(compressed_labels, z=(1,5)) # four slices
# A convenience object that simulates an array
# to efficiently extract image data
arr = compresso.CompressoArray(compressed_labels)
img = arr[:,:,1:5] # same four slices as above
# Returns header info as dict
# Has array dimensions and data width information.
header = compresso.header(compressed_labels)
# Extract the unique labels from a stream without
# decompressing to a full 3D array. Fast and low memory.
uniq_labels = compresso.labels(compressed_labels)
# Remap labels without decompressing. Could
# be useful for e.g. proofreading.
compressed_remapped = compresso.remap(
compressed_labels, { 1: 2, 2: 3, ... },
preserve_missing_labels=True
)
# Checks if the stream appears to be valid.
# This is a superficial check of headers.
is_valid = compresso.valid(stream)
```
```bash
# CLI compression of numpy data
# Compresso is designed to use a second stage compressor
# so use gzip, lzma, or others on the output file.
$ compresso data.npy # -> data.npy.cpso
$ compresso -d data.npy.cpso # -> data.npy
$ compresso --help
```
*NOTE: This is an extensive modification of the work by Matejek et al. which can be found here: https://github.com/VCG/compresso. It is not compatible with RhoANA streams.*
> Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression for efficient storage and transfer. General-purpose compression methods are less effective because the label data consists of large low-frequency regions with structured boundaries unlike natural image data. We present Compresso, a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D. We compare our method to existing compression schemes and provide a detailed evaluation on eleven biomedical and image segmentation datasets. Our method provides a factor of 600-2200x compression for label volumes, with running times suitable for practice.
**Paper**: Matejek _et al._, "Compresso: Efficient Compression of Segmentation Data For Connectomics", Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2017, 10-14. \[[CITE](https://scholar.google.com/scholar?q=Compresso%3A+Efficient+Compression+of+Segmentation+Data+For+Connectomics) | [PDF](https://vcg.seas.harvard.edu/publications/compresso-efficient-compression-of-segmentation-data-for-connectomics/paper)\]
In more concrete but simple terms, Compresso represents the boundary between segments as a bit-packed boolean field. Long runs of zeros are run-length encoded. The 4-connected components within that field are mapped to their corresponding labels. Boundary voxels are decoded with reference to their neighbors or, if that fails, by storing their label. A second stage of compression, such as gzip or lzma, is then applied. There are a few more details, but that's a reasonable overview.
## Setup
Requires Python 3.6+
```bash
pip install compresso
```
## Versions
| Major Version | Format Version | Description |
|---------------|----------------|----------------------------------------------------------------|
| 1 | - | Initial Release. Not usable due to bugs. No format versioning. |
| 2 | 0 | First major release. |
| 3 | 0,1 | Introduces random access to z slices in format version 1. |
## Compresso Stream Format
| Section | Bytes | Description |
|-----------|-------------------------------------------|-----------------------------------------------------------------------------------------------------------------|
| Header | 36 | Metadata incl. length of fields. |
| ids | header.data_width * header.id_size | Map of CCL regions to labels. |
| values | window_size * header.value_size | Values of renumbered windows. Bitfields describing boundaries. |
| locations | header.data_width * header.locations_size | Sequence of control codes (0-6) and labels offset by +7 that describe how to decode indeterminate locations in the boundary. |
| windows | The rest of the stream. | Sequence of numbers to be remapped from values. Describes the boundary structure of labels. |
| z_index | (optional tail) 2 * width * header.sz | Offsets into label values and locations to enable random access to slices. Format Version 1. |
`window_size` is the smallest data type that will contain `xstep * ystep * zstep`. For example, `steps=(4,4,1)` uses uint16 while `steps=(8,8,1)` uses uint64.
The byte width of the `z_index` is the smallest unsigned integer type that will contain `2 * sx * sy`.
## Codec Changes
The codec has been updated and is no longer compatible with the original. Below are the important changes we made that differ from the code published alongside the paper.
Implementation-wise, we also fixed several bugs, added guards against data corruption, did some performance tuning, and made sure that the entire codec is implemented in C++ and called from Python. Thus, the codec is usable from both C++ and Python, as well as from any target, such as WebAssembly, that C++ can be compiled to.
Thank you to the original authors for publishing your code and algorithm from which this repo is derived.
### Updated Header
The previous header was 72 bytes. We updated the header to be only 36 bytes. It now includes the magic number `cpso`, a version number, and the data width of the labels.
This additional information makes detecting valid compresso streams easier, allows for updating the format in the future, and allows us to assume smaller byte widths than 64-bit.
| Attribute | Value | Type | Description |
|-------------------|-------------------|---------|-------------------------------------------------|
| magic | cpso | char[4] | File magic number. |
| format_version | 0 or 1 | u8 | Version of the compresso stream. |
| data_width | 1,2,4,or 8 | u8 | Size of the labels in bytes. |
| sx, sy, sz | >= 0 | u16 x 3 | Size of array dimensions. |
| xstep,ystep,zstep | 0 < product <= 64 | u8 x 3 | Size of structure grid. |
| id_size | >= 0 | u64 | Size of array mapping of CCL regions to labels. |
| value_size | >= 0 | u32 | Size of array mapping windows to renumbering. |
| location_size | >= 0 | u64 | Size of indeterminate locations array. |
| connectivity | 4 or 6 | u8 | Connectivity for connected components. |
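Using Python's `struct` notation, the little-endian layout in the table above can be written as a single format string (the field values below are illustrative, not from a real stream):

```python
import struct

# Little-endian layout matching the header table:
# 4s magic, B format_version, B data_width, 3H sx/sy/sz,
# 3B xstep/ystep/zstep, Q id_size, I value_size, Q location_size, B connectivity
HEADER_FORMAT = "<4sBB3H3BQIQB"

header = struct.pack(
    HEADER_FORMAT,
    b"cpso",       # magic
    0,             # format_version
    4,             # data_width (e.g. uint32 labels)
    512, 512, 64,  # sx, sy, sz
    4, 4, 1,       # xstep, ystep, zstep
    1000,          # id_size
    50,            # value_size
    200,           # location_size
    4,             # connectivity
)
```

The field sizes in the table sum to 36 bytes, which `struct.calcsize(HEADER_FORMAT)` confirms.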
### Char Byte Stream
The previous implementation treated the byte stream as uniform u64 little endian. We now emit the encoded stream as `unsigned char` and write each appropriate data type in little endian.
### Variable Data Widths
The labels may assume any unsigned integer data width, which reduces the size of the ids and locations stream when appropriate. The encoded boundaries are reduced to the smallest size that fits. A 4x4x1 window is represented with u16, an 8x8x1 with u64. Less commonly used, but a 4x4x2 would be represented with u32, and a 4x2x1 would get a u8.
*Note that at this time only 4x4x1 and 8x8x1 windows are supported in this implementation, but the protocol accommodates the other sizes.*
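The idea behind the window encoding is that each window packs one boundary flag per voxel into a single integer, so the window's voxel count dictates the integer width. A minimal sketch of the packing (the helper name and row-major bit order are assumptions of this sketch, not the codec's documented layout):

```python
def pack_window(boundary_bits):
    """Pack a flat list of 0/1 boundary flags into one integer bitfield.

    A 4x4x1 window has 16 flags and fits in a uint16; an 8x8x1 window
    has 64 flags and needs a uint64.
    """
    word = 0
    for i, bit in enumerate(boundary_bits):
        word |= (bit & 1) << i
    return word

# a 4x4x1 window with boundary voxels along the first row
window = [1, 1, 1, 1] + [0] * 12
packed = pack_window(window)  # 0b1111 == 15, fits in a uint16
```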
### Supports Full Integer Range in Indeterminate Locations
The previous codec reserved six integers for instructions in the locations stream, which meant that six segmentation labels were not representable. We added a seventh reserved instruction that indicates the next value in the stream is a raw label, so the full integer range can be represented.
This can expand the size of the compressed stream. However, we only use this instruction for otherwise non-representable labels, so for most data it causes no increase at all, and only a minimal one so long as such labels are rare in indeterminate locations. The upside is that compresso now handles all possible inputs.
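The escape mechanism can be sketched as follows, assuming the low codes are reserved instructions, ordinary labels are stored as `label + 7`, and the escape instruction means "the next value is a raw label". The specific code assignments and function names here are illustrative:

```python
RESERVED = 7   # codes 0..6 are reserved for instructions
ESCAPE = 6     # assumed code: "next value is a raw label"

def encode_location(label, max_value):
    """Emit one or two stream values for a label in the locations stream."""
    if label + RESERVED <= max_value:
        return [label + RESERVED]  # representable: store as label + 7
    return [ESCAPE, label]         # otherwise escape to the raw value

def decode_location(stream, i):
    """Return (label, next_index) for the value(s) starting at stream[i]."""
    if stream[i] == ESCAPE:
        return stream[i + 1], i + 2
    return stream[i] - RESERVED, i + 1
```

For a uint8 stream (`max_value=255`), label 250 cannot be stored as 257, so it is emitted as the pair `[6, 250]`, while label 10 is stored directly as 17.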
### Supports 4 and 6 Connected Components
6-connected CCL seems like it would be a win because it would reduce the number of duplicated IDs that need to be stored. In an experiment it did significantly decrease the number of IDs, but at the expense of adding many more boundary voxels (since the Z direction must now be considered) and far more indeterminate locations. It ended up slower and larger on the connectomics segmentations we experimented with.
However, we suspect there are some images where 6-connectivity would do better. An obvious example is a solid color image with no boundaries. Images where 6-connectivity shines will probably have sparser and straighter boundaries, so that fewer additional boundary voxels are introduced.
### Random Access to Z-Slices (Format Version 1)
We make two changes to the codec to allow random access to Z slices. First, we disable indeterminate location codes 4 and 5, which refer to other slices, so that each slice can be decoded independently. Second, we append a tail of size `2 * index_width * sz` containing unsigned 8, 16, 32, or 64 bit offsets into the labels and locations streams for each slice. The offsets are arranged as all the label offsets followed by all the location offsets (they are not interleaved) and are difference coded to reduce the magnitude of the integers. The byte width is the smallest unsigned integer type that can represent `2 * sx * sy`, which is a coarse upper bound for the locations stream.
The overall impact of this change is a slight increase in the size of the compresso stream, and possibly reduced compressibility where the vertical reference codes were heavily used, such as on a checkerboard-type image.
This feature can be disabled by setting `compress(..., random_access_z_index=False)` which will emit a format version 0 stream. When this feature is enabled, it sets the format version to 1. This implementation can encode and decode both format versions.
This feature is not supported when `connectivity=6` due to the required interdependence of the slices.
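A sketch of decoding the tail, given `sz` and an already-determined `index_width` in bytes. Because the offsets are difference coded, each stored value is the delta from the previous offset; the exact semantics of the offsets are an assumption of this sketch:

```python
import struct

def read_z_index(tail, sz, index_width):
    """Decode the difference-coded z_index tail into absolute offsets.

    Returns (label_offsets, location_offsets), one entry per slice.
    Layout: all label deltas first, then all location deltas (not interleaved).
    """
    fmt = {1: "B", 2: "H", 4: "I", 8: "Q"}[index_width]
    deltas = struct.unpack("<%d%s" % (2 * sz, fmt), tail)

    def undiff(ds):
        out, total = [], 0
        for d in ds:
            total += d
            out.append(total)
        return out

    return undiff(deltas[:sz]), undiff(deltas[sz:])

# three slices, uint16 entries: label deltas (10, 5, 5), location deltas (3, 2, 2)
tail = struct.pack("<6H", 10, 5, 5, 3, 2, 2)
labels, locations = read_z_index(tail, 3, 2)  # ([10, 15, 20], [3, 5, 7])
```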
### Results From the Paper
**Compression Performance**

Compression ratios of general-purpose compression methods combined with Compresso and Neuroglancer. Compresso paired with LZMA yields the best compression ratios for all connectomics datasets (left) and on average (four out of five) for the others (right).