%global _empty_manifest_terminate_build 0
Name: python-museval
Version: 0.4.0
Release: 1
Summary: Evaluation tools for the SIGSEP MUS database
License: MIT
URL: https://github.com/sigsep/sigsep-mus-eval
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/68/7f/bc830da872a2f0c5e2bee707e218123e53324cd24b75f6b55ed7bbb79c08/museval-0.4.0.tar.gz
BuildArch: noarch
Requires: python3-musdb
Requires: python3-pandas
Requires: python3-numpy
Requires: python3-scipy
Requires: python3-simplejson
Requires: python3-soundfile
Requires: python3-jsonschema
Requires: python3-check-manifest
Requires: python3-sphinx
Requires: python3-sphinx-rtd-theme
Requires: python3-recommonmark
Requires: python3-numpydoc
Requires: python3-pytest
%description
# museval
A Python package to evaluate source separation results using the [MUSDB18](https://sigsep.github.io/musdb) dataset. This package was part of the [MUS task](https://sisec.inria.fr/home/2018-professionally-produced-music-recordings/) of the [Signal Separation Evaluation Campaign (SISEC)](https://sisec.inria.fr/).
### BSSEval v4
The BSSEval metrics, as implemented in the [MATLAB toolboxes](http://bass-db.gforge.inria.fr/bss_eval/) and re-implemented in [mir_eval](http://craffel.github.io/mir_eval/#module-mir_eval.separation), are widely used in the audio separation literature. One particularity of BSSEval is that the metrics are computed after optimally matching the estimates to the true sources through linear distortion filters. This makes the criteria robust to some linear mismatches. Apart from the optional evaluation over all possible permutations of the sources, this matching accounts for most of the computational cost of BSSEval, especially since it is repeated for each evaluation window when the metrics are computed framewise.
For this package, we added the option of _time invariant_ distortion filters, instead of necessarily letting them vary over time as in previous versions of BSSEval. First, enabling this option _significantly reduces_ the computational cost of evaluation, because the matching needs to be done only once for the whole signal. Second, it makes the evaluation more discriminative, because time-varying matching filters turn out to over-estimate performance. Third, it makes the matching more robust, because the true sources are rarely silent over the whole recording, while they often are within short windows.
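Besides the track-based helpers described below, the v4 metrics can also be computed directly on raw audio arrays. Here is a minimal sketch, assuming the low-level `museval.evaluate` function takes numpy arrays shaped `(nsrc, nsamples, nchannels)` with window and hop sizes in samples (check the API documentation for the exact signature):
```python
import numpy as np
import museval

rate = 44100
# random stand-in signals: 2 sources, 5 seconds, stereo
references = np.random.randn(2, 5 * rate, 2)
estimates = references + 0.1 * np.random.randn(2, 5 * rate, 2)

# mode='v4' matches each estimate to its reference through a single,
# time-invariant distortion filter; win/hop are given in samples
sdr, isr, sir, sar = museval.evaluate(
    references, estimates, win=rate, hop=rate, mode='v4'
)
print(sdr.shape)  # one score per source and evaluation frame
```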
## Installation
### Package installation
You can install the `museval` package using pip:
```bash
pip install museval
```
## Usage
The purpose of this package is to evaluate source separation results and write out validated `json` files. We want to encourage users to use this evaluation output format as the standardized way to share source separation results. `museval` is designed to work in conjunction with the [musdb](https://github.com/sigsep/sigsep-mus-db) tools and the MUSDB18 dataset (however, `museval` can also be used without `musdb`).
### Separate MUSDB18 tracks and Evaluate on-the-fly
- If you want to perform evaluation while processing your source separation results, you can make use of `musdb` track objects.
Here is an example of such a function, separating the mixture into a __vocals__ and an __accompaniment__ track:
```python
import musdb
import museval

def estimate_and_evaluate(track):
    # assume mix as estimates
    estimates = {
        'vocals': track.audio,
        'accompaniment': track.audio
    }

    # Evaluate using museval
    scores = museval.eval_mus_track(
        track, estimates, output_dir="path/to/json"
    )

    # print nicely formatted and aggregated scores
    print(scores)

mus = musdb.DB()
for track in mus:
    estimate_and_evaluate(track)
```
Make sure `output_dir` is set. `museval` will recreate the `musdb` file structure in that folder and write the evaluation results there.
### Evaluate MUSDB18 tracks later
If you have already computed your estimates, we provide you with an easy-to-use function to process evaluation results afterwards.
Simply use `museval.eval_mus_dir` to evaluate your `estimates_dir` and write the results into the `output_dir`. For convenience, the `eval_mus_dir` function accepts all parameters of `musdb.run()`.
```python
import musdb
import museval

# initiate musdb
mus = musdb.DB()

# evaluate an existing estimate folder with wav files
museval.eval_mus_dir(
    dataset=mus,        # instance of musdb
    estimates_dir=...,  # path to estimate folder
    output_dir=...,     # set a folder to write eval json files
    subsets="test",
    is_wav=False
)
```
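To analyze such a folder of evaluation `json` files afterwards, the aggregation tools from the next section can be pointed at the output directory. A hedged sketch, assuming `EvalStore` provides an `add_eval_dir` helper that scans a folder of track `json` files (treat the method name as an assumption):
```python
import museval

results = museval.EvalStore()
# assumption: add_eval_dir walks output_dir and adds every track json found
results.add_eval_dir('path/to/output_dir')
print(results)
```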
### Aggregate and Analyze Scores
Scores for each track can also be aggregated in a pandas DataFrame for easier analysis or the creation of boxplots.
To aggregate multiple tracks in a DataFrame, create a `museval.EvalStore()` object and add the track scores successively.
```python
results = museval.EvalStore(frames_agg='median', tracks_agg='median')

for track in tracks:
    # ...
    results.add_track(museval.eval_mus_track(track, estimates))
```
When all tracks have been added, the aggregated scores can be shown using `print(results)`, and the results can be saved as a pandas DataFrame using `results.save('my_method.pandas')`.
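Assuming `save` pickles the underlying DataFrame, the file can also be re-loaded with plain pandas for custom analysis. In the sketch below, the long-format column names (`metric`, `target`, `score`) are assumptions about the layout, not a documented schema:
```python
import pandas as pd

# re-load the saved scores (assumed to be a pickled long-format DataFrame)
df = pd.read_pickle('my_method.pandas')

# for example, median SDR per target across all tracks and frames
print(df[df.metric == 'SDR'].groupby('target')['score'].median())
```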
To compare multiple methods, create a `museval.MethodStore()` object and add the results:
```python
methods = museval.MethodStore()
methods.add_evalstore(results, name="XZY")
```
To compare against participants from [SiSEC MUS 2018](https://github.com/sigsep/sigsep-mus-2018), we provide a convenient method to load the existing scores on demand using `methods.add_sisec18()`. For the creation of plots and statistical significance tests we refer to our [list of examples](/examples).
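A comparison against the published scores might then look like the following sketch, assuming `MethodStore` exposes its scores as a long-format pandas DataFrame attribute `df` with `method`, `target`, `metric` and `score` columns (these names are assumptions, not a documented API):
```python
import museval

methods = museval.MethodStore()
methods.add_sisec18()                       # load published SiSEC 2018 scores
methods.add_evalstore(results, name="XZY")  # `results` from the example above

# assumption: scores live in a long-format DataFrame under methods.df
sdr = methods.df[methods.df.metric == "SDR"]
print(sdr.groupby(["method", "target"])["score"].median().unstack())
```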
#### Commandline tool
We provide a command line wrapper for `eval_mus_dir` through the `museval` command line tool. The following example is equivalent to the code example above:
```
museval -p --musdb path/to/musdb -o path/to/output_dir path/to/estimate_dir
```
:bulb: use the `--iswav` flag to use the decoded wav _musdb_ dataset.
### Using Docker for Evaluation
If you don't want to set up a Python environment to run the evaluation, we recommend using [Docker](http://docker.com). Assuming you have already computed your estimates and installed Docker on your machine, you just need to run the following two commands in your terminal:
#### 1. Pull Docker Container
Pull our precompiled `sigsep-mus-eval` image from [dockerhub](https://hub.docker.com/r/faroit/sigsep-mus-eval/):
```
docker pull faroit/sigsep-mus-eval
```
#### 2. Run evaluation
To run the evaluation inside the Docker container, three absolute paths are required:
* `estimatesdir` stands for the absolute path to the estimates directory (for instance `/home/faroit/dev/mymethod/musdboutput`).
* `musdbdir` stands for the absolute path to the root folder of musdb (for instance `/home/faroit/dev/data/musdb18`).
* `outputdir` stands for the absolute path to the output directory (for instance `/home/faroit/dev/mymethod/scores`).
We mount these directories into the Docker container using the `-v` flags and start the instance:
```
docker run --rm -v estimatesdir:/est -v musdbdir:/mus -v outputdir:/out faroit/sigsep-mus-eval --musdb /mus -o /out /est
```
In the line above, replace `estimatesdir`, `musdbdir` and `outputdir` with the absolute paths for your setup. Please note that Docker requires absolute paths, so you have to rely on your command line environment to convert relative paths to absolute paths (e.g. by using `$HOME/` on Unix).
:warning: `museval` requires a significant amount of memory for the evaluation. Evaluating all five targets for _MUSDB18_ may require more than 4GB of RAM. If you enable multiprocessing with the `-p` switch, this can grow to 16GB of RAM. It is recommended to adjust your Docker preferences accordingly, because the container might simply quit if it runs out of memory.
## How to contribute
_museval_ is a community-focused project; we therefore encourage the community to submit bug-fixes and requests for technical support through [github issues](https://github.com/sigsep/sigsep-mus-eval/issues/new). For more details on how to contribute, please follow our [`CONTRIBUTING.md`](CONTRIBUTING.md).
## References
A. If you use `museval` in the context of source separation evaluation, comparing a method to other methods of [SiSEC 2018](http://sisec18.unmix.app/), please cite
```
@InProceedings{SiSEC18,
  author    = "St{\"o}ter, Fabian-Robert and Liutkus, Antoine and Ito, Nobutaka",
  title     = "The 2018 Signal Separation Evaluation Campaign",
  booktitle = "Latent Variable Analysis and Signal Separation:
               14th International Conference, LVA/ICA 2018, Surrey, UK",
  year      = "2018",
  pages     = "293--305"
}
```
B. If you use the software for any other purpose, you can cite the software release:
[DOI 10.5281/zenodo.3376621](https://doi.org/10.5281/zenodo.3376621)
%package -n python3-museval
Summary: Evaluation tools for the SIGSEP MUS database
Provides: python-museval
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-museval
# museval
A Python package to evaluate source separation results using the [MUSDB18](https://sigsep.github.io/musdb) dataset. This package was part of the [MUS task](https://sisec.inria.fr/home/2018-professionally-produced-music-recordings/) of the [Signal Separation Evaluation Campaign (SISEC)](https://sisec.inria.fr/).
### BSSEval v4
The BSSEval metrics, as implemented in the [MATLAB toolboxes](http://bass-db.gforge.inria.fr/bss_eval/) and re-implemented in [mir_eval](http://craffel.github.io/mir_eval/#module-mir_eval.separation), are widely used in the audio separation literature. One particularity of BSSEval is that the metrics are computed after optimally matching the estimates to the true sources through linear distortion filters. This makes the criteria robust to some linear mismatches. Apart from the optional evaluation over all possible permutations of the sources, this matching accounts for most of the computational cost of BSSEval, especially since it is repeated for each evaluation window when the metrics are computed framewise.
For this package, we added the option of _time invariant_ distortion filters, instead of necessarily letting them vary over time as in previous versions of BSSEval. First, enabling this option _significantly reduces_ the computational cost of evaluation, because the matching needs to be done only once for the whole signal. Second, it makes the evaluation more discriminative, because time-varying matching filters turn out to over-estimate performance. Third, it makes the matching more robust, because the true sources are rarely silent over the whole recording, while they often are within short windows.
## Installation
### Package installation
You can install the `museval` package using pip:
```bash
pip install museval
```
## Usage
The purpose of this package is to evaluate source separation results and write out validated `json` files. We want to encourage users to use this evaluation output format as the standardized way to share source separation results. `museval` is designed to work in conjunction with the [musdb](https://github.com/sigsep/sigsep-mus-db) tools and the MUSDB18 dataset (however, `museval` can also be used without `musdb`).
### Separate MUSDB18 tracks and Evaluate on-the-fly
- If you want to perform evaluation while processing your source separation results, you can make use of `musdb` track objects.
Here is an example of such a function, separating the mixture into a __vocals__ and an __accompaniment__ track:
```python
import musdb
import museval

def estimate_and_evaluate(track):
    # assume mix as estimates
    estimates = {
        'vocals': track.audio,
        'accompaniment': track.audio
    }

    # Evaluate using museval
    scores = museval.eval_mus_track(
        track, estimates, output_dir="path/to/json"
    )

    # print nicely formatted and aggregated scores
    print(scores)

mus = musdb.DB()
for track in mus:
    estimate_and_evaluate(track)
```
Make sure `output_dir` is set. `museval` will recreate the `musdb` file structure in that folder and write the evaluation results there.
### Evaluate MUSDB18 tracks later
If you have already computed your estimates, we provide you with an easy-to-use function to process evaluation results afterwards.
Simply use `museval.eval_mus_dir` to evaluate your `estimates_dir` and write the results into the `output_dir`. For convenience, the `eval_mus_dir` function accepts all parameters of `musdb.run()`.
```python
import musdb
import museval

# initiate musdb
mus = musdb.DB()

# evaluate an existing estimate folder with wav files
museval.eval_mus_dir(
    dataset=mus,        # instance of musdb
    estimates_dir=...,  # path to estimate folder
    output_dir=...,     # set a folder to write eval json files
    subsets="test",
    is_wav=False
)
```
### Aggregate and Analyze Scores
Scores for each track can also be aggregated in a pandas DataFrame for easier analysis or the creation of boxplots.
To aggregate multiple tracks in a DataFrame, create a `museval.EvalStore()` object and add the track scores successively.
```python
results = museval.EvalStore(frames_agg='median', tracks_agg='median')

for track in tracks:
    # ...
    results.add_track(museval.eval_mus_track(track, estimates))
```
When all tracks have been added, the aggregated scores can be shown using `print(results)`, and the results can be saved as a pandas DataFrame using `results.save('my_method.pandas')`.
To compare multiple methods, create a `museval.MethodStore()` object and add the results:
```python
methods = museval.MethodStore()
methods.add_evalstore(results, name="XZY")
```
To compare against participants from [SiSEC MUS 2018](https://github.com/sigsep/sigsep-mus-2018), we provide a convenient method to load the existing scores on demand using `methods.add_sisec18()`. For the creation of plots and statistical significance tests we refer to our [list of examples](/examples).
#### Commandline tool
We provide a command line wrapper for `eval_mus_dir` through the `museval` command line tool. The following example is equivalent to the code example above:
```
museval -p --musdb path/to/musdb -o path/to/output_dir path/to/estimate_dir
```
:bulb: use the `--iswav` flag to use the decoded wav _musdb_ dataset.
### Using Docker for Evaluation
If you don't want to set up a Python environment to run the evaluation, we recommend using [Docker](http://docker.com). Assuming you have already computed your estimates and installed Docker on your machine, you just need to run the following two commands in your terminal:
#### 1. Pull Docker Container
Pull our precompiled `sigsep-mus-eval` image from [dockerhub](https://hub.docker.com/r/faroit/sigsep-mus-eval/):
```
docker pull faroit/sigsep-mus-eval
```
#### 2. Run evaluation
To run the evaluation inside the Docker container, three absolute paths are required:
* `estimatesdir` stands for the absolute path to the estimates directory (for instance `/home/faroit/dev/mymethod/musdboutput`).
* `musdbdir` stands for the absolute path to the root folder of musdb (for instance `/home/faroit/dev/data/musdb18`).
* `outputdir` stands for the absolute path to the output directory (for instance `/home/faroit/dev/mymethod/scores`).
We mount these directories into the Docker container using the `-v` flags and start the instance:
```
docker run --rm -v estimatesdir:/est -v musdbdir:/mus -v outputdir:/out faroit/sigsep-mus-eval --musdb /mus -o /out /est
```
In the line above, replace `estimatesdir`, `musdbdir` and `outputdir` with the absolute paths for your setup. Please note that Docker requires absolute paths, so you have to rely on your command line environment to convert relative paths to absolute paths (e.g. by using `$HOME/` on Unix).
:warning: `museval` requires a significant amount of memory for the evaluation. Evaluating all five targets for _MUSDB18_ may require more than 4GB of RAM. If you enable multiprocessing with the `-p` switch, this can grow to 16GB of RAM. It is recommended to adjust your Docker preferences accordingly, because the container might simply quit if it runs out of memory.
## How to contribute
_museval_ is a community-focused project; we therefore encourage the community to submit bug-fixes and requests for technical support through [github issues](https://github.com/sigsep/sigsep-mus-eval/issues/new). For more details on how to contribute, please follow our [`CONTRIBUTING.md`](CONTRIBUTING.md).
## References
A. If you use `museval` in the context of source separation evaluation, comparing a method to other methods of [SiSEC 2018](http://sisec18.unmix.app/), please cite
```
@InProceedings{SiSEC18,
  author    = "St{\"o}ter, Fabian-Robert and Liutkus, Antoine and Ito, Nobutaka",
  title     = "The 2018 Signal Separation Evaluation Campaign",
  booktitle = "Latent Variable Analysis and Signal Separation:
               14th International Conference, LVA/ICA 2018, Surrey, UK",
  year      = "2018",
  pages     = "293--305"
}
```
B. If you use the software for any other purpose, you can cite the software release:
[DOI 10.5281/zenodo.3376621](https://doi.org/10.5281/zenodo.3376621)
%package help
Summary: Development documents and examples for museval
Provides: python3-museval-doc
%description help
# museval
A Python package to evaluate source separation results using the [MUSDB18](https://sigsep.github.io/musdb) dataset. This package was part of the [MUS task](https://sisec.inria.fr/home/2018-professionally-produced-music-recordings/) of the [Signal Separation Evaluation Campaign (SISEC)](https://sisec.inria.fr/).
### BSSEval v4
The BSSEval metrics, as implemented in the [MATLAB toolboxes](http://bass-db.gforge.inria.fr/bss_eval/) and re-implemented in [mir_eval](http://craffel.github.io/mir_eval/#module-mir_eval.separation), are widely used in the audio separation literature. One particularity of BSSEval is that the metrics are computed after optimally matching the estimates to the true sources through linear distortion filters. This makes the criteria robust to some linear mismatches. Apart from the optional evaluation over all possible permutations of the sources, this matching accounts for most of the computational cost of BSSEval, especially since it is repeated for each evaluation window when the metrics are computed framewise.
For this package, we added the option of _time invariant_ distortion filters, instead of necessarily letting them vary over time as in previous versions of BSSEval. First, enabling this option _significantly reduces_ the computational cost of evaluation, because the matching needs to be done only once for the whole signal. Second, it makes the evaluation more discriminative, because time-varying matching filters turn out to over-estimate performance. Third, it makes the matching more robust, because the true sources are rarely silent over the whole recording, while they often are within short windows.
## Installation
### Package installation
You can install the `museval` package using pip:
```bash
pip install museval
```
## Usage
The purpose of this package is to evaluate source separation results and write out validated `json` files. We want to encourage users to use this evaluation output format as the standardized way to share source separation results. `museval` is designed to work in conjunction with the [musdb](https://github.com/sigsep/sigsep-mus-db) tools and the MUSDB18 dataset (however, `museval` can also be used without `musdb`).
### Separate MUSDB18 tracks and Evaluate on-the-fly
- If you want to perform evaluation while processing your source separation results, you can make use of `musdb` track objects.
Here is an example of such a function, separating the mixture into a __vocals__ and an __accompaniment__ track:
```python
import musdb
import museval

def estimate_and_evaluate(track):
    # assume mix as estimates
    estimates = {
        'vocals': track.audio,
        'accompaniment': track.audio
    }

    # Evaluate using museval
    scores = museval.eval_mus_track(
        track, estimates, output_dir="path/to/json"
    )

    # print nicely formatted and aggregated scores
    print(scores)

mus = musdb.DB()
for track in mus:
    estimate_and_evaluate(track)
```
Make sure `output_dir` is set. `museval` will recreate the `musdb` file structure in that folder and write the evaluation results there.
### Evaluate MUSDB18 tracks later
If you have already computed your estimates, we provide you with an easy-to-use function to process evaluation results afterwards.
Simply use `museval.eval_mus_dir` to evaluate your `estimates_dir` and write the results into the `output_dir`. For convenience, the `eval_mus_dir` function accepts all parameters of `musdb.run()`.
```python
import musdb
import museval

# initiate musdb
mus = musdb.DB()

# evaluate an existing estimate folder with wav files
museval.eval_mus_dir(
    dataset=mus,        # instance of musdb
    estimates_dir=...,  # path to estimate folder
    output_dir=...,     # set a folder to write eval json files
    subsets="test",
    is_wav=False
)
```
### Aggregate and Analyze Scores
Scores for each track can also be aggregated in a pandas DataFrame for easier analysis or the creation of boxplots.
To aggregate multiple tracks in a DataFrame, create a `museval.EvalStore()` object and add the track scores successively.
```python
results = museval.EvalStore(frames_agg='median', tracks_agg='median')

for track in tracks:
    # ...
    results.add_track(museval.eval_mus_track(track, estimates))
```
When all tracks have been added, the aggregated scores can be shown using `print(results)`, and the results can be saved as a pandas DataFrame using `results.save('my_method.pandas')`.
To compare multiple methods, create a `museval.MethodStore()` object and add the results:
```python
methods = museval.MethodStore()
methods.add_evalstore(results, name="XZY")
```
To compare against participants from [SiSEC MUS 2018](https://github.com/sigsep/sigsep-mus-2018), we provide a convenient method to load the existing scores on demand using `methods.add_sisec18()`. For the creation of plots and statistical significance tests we refer to our [list of examples](/examples).
#### Commandline tool
We provide a command line wrapper for `eval_mus_dir` through the `museval` command line tool. The following example is equivalent to the code example above:
```
museval -p --musdb path/to/musdb -o path/to/output_dir path/to/estimate_dir
```
:bulb: use the `--iswav` flag to use the decoded wav _musdb_ dataset.
### Using Docker for Evaluation
If you don't want to set up a Python environment to run the evaluation, we recommend using [Docker](http://docker.com). Assuming you have already computed your estimates and installed Docker on your machine, you just need to run the following two commands in your terminal:
#### 1. Pull Docker Container
Pull our precompiled `sigsep-mus-eval` image from [dockerhub](https://hub.docker.com/r/faroit/sigsep-mus-eval/):
```
docker pull faroit/sigsep-mus-eval
```
#### 2. Run evaluation
To run the evaluation inside the Docker container, three absolute paths are required:
* `estimatesdir` stands for the absolute path to the estimates directory (for instance `/home/faroit/dev/mymethod/musdboutput`).
* `musdbdir` stands for the absolute path to the root folder of musdb (for instance `/home/faroit/dev/data/musdb18`).
* `outputdir` stands for the absolute path to the output directory (for instance `/home/faroit/dev/mymethod/scores`).
We mount these directories into the Docker container using the `-v` flags and start the instance:
```
docker run --rm -v estimatesdir:/est -v musdbdir:/mus -v outputdir:/out faroit/sigsep-mus-eval --musdb /mus -o /out /est
```
In the line above, replace `estimatesdir`, `musdbdir` and `outputdir` with the absolute paths for your setup. Please note that Docker requires absolute paths, so you have to rely on your command line environment to convert relative paths to absolute paths (e.g. by using `$HOME/` on Unix).
:warning: `museval` requires a significant amount of memory for the evaluation. Evaluating all five targets for _MUSDB18_ may require more than 4GB of RAM. If you enable multiprocessing with the `-p` switch, this can grow to 16GB of RAM. It is recommended to adjust your Docker preferences accordingly, because the container might simply quit if it runs out of memory.
## How to contribute
_museval_ is a community-focused project; we therefore encourage the community to submit bug-fixes and requests for technical support through [github issues](https://github.com/sigsep/sigsep-mus-eval/issues/new). For more details on how to contribute, please follow our [`CONTRIBUTING.md`](CONTRIBUTING.md).
## References
A. If you use `museval` in the context of source separation evaluation, comparing a method to other methods of [SiSEC 2018](http://sisec18.unmix.app/), please cite
```
@InProceedings{SiSEC18,
  author    = "St{\"o}ter, Fabian-Robert and Liutkus, Antoine and Ito, Nobutaka",
  title     = "The 2018 Signal Separation Evaluation Campaign",
  booktitle = "Latent Variable Analysis and Signal Separation:
               14th International Conference, LVA/ICA 2018, Surrey, UK",
  year      = "2018",
  pages     = "293--305"
}
```
B. If you use the software for any other purpose, you can cite the software release:
[DOI 10.5281/zenodo.3376621](https://doi.org/10.5281/zenodo.3376621)
%prep
%autosetup -n museval-0.4.0
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-museval -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Tue Apr 25 2023 Python_Bot <Python_Bot@openeuler.org> - 0.4.0-1
- Package Spec generated