%global _empty_manifest_terminate_build 0
Name: python-loss-landscapes
Version: 3.0.6
Release: 1
Summary: A library for approximating loss landscapes in low-dimensional parameter subspaces
License: MIT
URL: https://github.com/marcellodebernardi/loss-landscapes
Source0: https://mirrors.aliyun.com/pypi/web/packages/66/d0/549d0011eb045d57f2ff0f5cca3eb8ee9b7c0943f0e687a648acbb5a9a91/loss_landscapes-3.0.6.tar.gz
BuildArch: noarch
Requires: python3-numpy
%description
# loss-landscapes
`loss-landscapes` is a PyTorch library for approximating neural network loss functions, and other related metrics,
in low-dimensional subspaces of the model's parameter space. The library makes the production of visualizations
such as those seen in [Visualizing the Loss Landscape of Neural Nets](https://arxiv.org/abs/1712.09913v3) much
easier, aiding the analysis of the geometry of neural network loss landscapes.
This library does not provide plotting facilities, letting the user define how the data should be plotted. Other
deep learning frameworks are not supported, though a TensorFlow version, `loss-landscapes-tf`, is planned for
a future release.
**NOTE: this library is in early development. Bugs are virtually a certainty, and the API is volatile. Do not use
this library in production code. For prototyping and research, always use the newest version of the library.**
## 1. What is a Loss Landscape?
Let `L : Parameters -> Real Numbers` be a loss function, which maps a point in the model parameter space to a
real number. For a neural network with `n` parameters, the loss function `L` takes an `n`-dimensional input. We
can define the loss landscape as the set of all `n+1`-dimensional points `(param, L(param))`, for all points
`param` in the parameter space. For example, the image below, reproduced from the paper by Li et al (2018), link
above, provides a visual representation of what a loss function over a two-dimensional parameter space might look
like:
<p align="center"><img src="/img/loss-landscape.png" width="60%" align="middle"/></p>
Of course, real machine learning models have a number of parameters much greater than 2, so the parameter space of
the model is virtually never two-dimensional. Because we can't print visualizations in more than two dimensions,
we cannot hope to visualize the "true" shape of the loss landscape. Instead, a number of techniques
exist for reducing the parameter space to one or two dimensions, ranging from dimensionality reduction techniques
like PCA, to restricting ourselves to a particular subspace of the overall parameter space. For more details,
read Li et al's paper.
## 2. Base Example: Supervised Loss in Parameter Subspaces
The simplest use case for `loss-landscapes` is to estimate the value of a supervised loss function in a subspace
of a neural network's parameter space. The subspace in question may be a point, a line, or a plane (these subspaces
can be meaningfully visualized). Suppose the user has trained a supervised learning model, of type `torch.nn.Module`,
on a dataset consisting of samples `X` and labels `y`, by minimizing some loss function. The user now wishes to
produce a surface plot akin to the one in section 1.
This is accomplished as follows:
````python
from loss_landscapes import random_plane
from loss_landscapes.metrics import Loss

metric = Loss(loss_function, X, y)
landscape = random_plane(model, metric, normalize='filter')
````
As seen in the example above, the two core concepts in `loss-landscapes` are _metrics_ and _parameter subspaces_. The
latter define the section of parameter space to be considered, while the former define what quantity is evaluated at
each considered point in parameter space, and how it is computed. In the example above, we define a `Loss` metric
over data `X` and labels `y`, and instruct `loss-landscapes` to evaluate it in a randomly generated planar subspace.
This returns a 2-dimensional array of loss values, which the user can plot in any way they choose. Example
visualizations that can be produced from this type of data are shown below.
<p align="center"><img src="/img/loss-contour.png" width="75%" align="middle"/></p>
<p align="center"><img src="/img/loss-contour-3d.png" width="75%" align="middle"/></p>
Check the `examples` directory for Jupyter notebooks with more in-depth examples of what is possible.
## 3. Metrics and Custom Metrics
The `loss-landscapes` library can compute any quantity of interest at a collection of points in a parameter subspace,
not just loss. This is accomplished using a `Metric`: a callable object which applies a pre-determined function,
such as a cross-entropy loss with a specific set of inputs and outputs, to the model. The `loss_landscapes.metrics`
package contains a number of metrics that cover common use cases, such as `Loss` (evaluates a loss
function), `LossGradient` (evaluates the gradient of the loss w.r.t. the model parameters),
`PrincipalCurvatureEvaluator` (evaluates the principal curvatures of the loss function), and more.
Furthermore, the user can add custom metrics by subclassing `Metric`. As an example, consider the library
implementation of `Loss`, for `torch` models:
````python
import abc

import torch

from loss_landscapes.model_interface.model_wrapper import ModelWrapper


class Metric(abc.ABC):
    """ A quantity that can be computed given a model or an agent. """

    def __init__(self):
        super().__init__()

    @abc.abstractmethod
    def __call__(self, model_wrapper: ModelWrapper):
        pass


class Loss(Metric):
    """ Computes a specified loss function over specified input-output pairs. """

    def __init__(self, loss_fn, inputs: torch.Tensor, target: torch.Tensor):
        super().__init__()
        self.loss_fn = loss_fn
        self.inputs = inputs
        self.target = target

    def __call__(self, model_wrapper: ModelWrapper) -> float:
        return self.loss_fn(model_wrapper.forward(self.inputs), self.target).item()
````
The user may create custom `Metric`s in a similar manner. One complication is that the `Metric` class'
`__call__` method is designed to take as input a `ModelWrapper` rather than a model. This class is internal
to the library and exists to facilitate the handling of the myriad of different models a user may pass as
inputs to a function such as `loss_landscapes.planar_interpolation()`. It is sufficient for the user to know
that a `ModelWrapper` is a callable object that can be used to call the model on a given input (see the `call_fn`
argument of the `GeneralModelWrapper` class in the next section). The class also provides a `get_model()` method
that exposes a reference to the underlying model, should the user wish to carry out more complicated operations
on it.
In summary, the `Metric` abstraction adds a great degree of flexibility. A metric defines what quantity
dependent on model parameters the user is interested in evaluating, and how to evaluate it. The user could define,
for example, a metric that computes an estimate of the expected return of a reinforcement learning agent.
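As an illustration, a hypothetical `Accuracy` metric for a classifier could be sketched as follows (`Accuracy` is
not part of the library; the sketch assumes the `Metric` and `ModelWrapper` classes shown above):
````python
# Hypothetical custom metric, for illustration only: classification accuracy
# over a fixed batch. Assumes the Metric and ModelWrapper classes above.
import torch

class Accuracy(Metric):
    """ Computes classification accuracy over specified input-output pairs. """
    def __init__(self, inputs: torch.Tensor, target: torch.Tensor):
        super().__init__()
        self.inputs = inputs
        self.target = target

    def __call__(self, model_wrapper: ModelWrapper) -> float:
        predictions = model_wrapper.forward(self.inputs).argmax(dim=1)
        return (predictions == self.target).float().mean().item()
````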
## 4. More Complex Models
In the common case of a simple supervised learning model, as in the sections above, client code calls functions
such as `loss_landscapes.linear_interpolation` and passes as argument a PyTorch module of type `torch.nn.Module`.
For more complex cases, such as when the user wants to evaluate the loss landscape as a function of a subset of
the model parameters, or the expected return landscape for an RL agent, the user must specify to the `loss-landscapes`
library how to interface with the model (or the agent, on a more general level). This is accomplished using a
`ModelWrapper` object, which hides the implementation details of the model or agent. For general use, the library
supplies the `GeneralModelWrapper` in the `loss_landscapes.model_interface.model_wrapper` module.
Assume the user wishes to estimate the expected return of some RL agent which provides an `agent.act(observation)`
method for action selection. Then, the example from section 2 becomes as follows:
````python
from loss_landscapes import random_plane
from loss_landscapes.model_interface.model_wrapper import GeneralModelWrapper

# ExpectedReturnMetric is an expected-return metric like the one described in section 3
metric = ExpectedReturnMetric(env, n_samples)
agent_wrapper = GeneralModelWrapper(agent, [agent.q_function, agent.policy], lambda agent, x: agent.act(x))
landscape = random_plane(agent_wrapper, metric, normalize='filter')
````
## 5. WIP: Connecting Paths, Saddle Points, and Trajectory Tracking
A number of features are currently under development but are as yet incomplete.
A number of papers in recent years have shown that loss landscapes of neural networks are dominated by a
proliferation of saddle points, that good solutions are better described as large low-loss plateaus than as
"well-bottom" points, and that for sufficiently high-dimensional networks, a low-loss path in parameter space can
be found between almost any pair of minima. In the future, the `loss-landscapes` library will feature
implementations of algorithms for finding such low-loss connecting paths in the loss landscape, as well as tools to
facilitate the study of saddle points.
Trajectory tracking features are also under consideration, though at this time it's unclear what this
should actually mean, as the optimization trajectory is implicitly tracked by the user's training loop. Any metric
along the optimization trajectory can be tracked with libraries such as [ignite](https://github.com/pytorch/ignite)
for PyTorch.
## 6. Support for Other DL Libraries
The `loss-landscapes` library was initially designed to be agnostic to the DL framework in use. However, with the
increasing number of use cases to cover, it became obvious that maintaining the original library-agnostic design
was adding too much complexity to the code.
A TensorFlow version, `loss-landscapes-tf`, is planned for the future.
## 7. Installation and Use
The package is available on PyPI. Install using `pip install loss-landscapes`. To use the library, import as follows:
````python
import loss_landscapes
import loss_landscapes.metrics
````
%package -n python3-loss-landscapes
Summary: A library for approximating loss landscapes in low-dimensional parameter subspaces
Provides: python-loss-landscapes
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-loss-landscapes
# loss-landscapes
`loss-landscapes` is a PyTorch library for approximating neural network loss functions, and other related metrics,
in low-dimensional subspaces of the model's parameter space. The library makes the production of visualizations
such as those seen in [Visualizing the Loss Landscape of Neural Nets](https://arxiv.org/abs/1712.09913v3) much
easier, aiding the analysis of the geometry of neural network loss landscapes.
This library does not provide plotting facilities, letting the user define how the data should be plotted. Other
deep learning frameworks are not supported, though a TensorFlow version, `loss-landscapes-tf`, is planned for
a future release.
**NOTE: this library is in early development. Bugs are virtually a certainty, and the API is volatile. Do not use
this library in production code. For prototyping and research, always use the newest version of the library.**
## 1. What is a Loss Landscape?
Let `L : Parameters -> Real Numbers` be a loss function, which maps a point in the model parameter space to a
real number. For a neural network with `n` parameters, the loss function `L` takes an `n`-dimensional input. We
can define the loss landscape as the set of all `n+1`-dimensional points `(param, L(param))`, for all points
`param` in the parameter space. For example, the image below, reproduced from the paper by Li et al (2018), link
above, provides a visual representation of what a loss function over a two-dimensional parameter space might look
like:
<p align="center"><img src="/img/loss-landscape.png" width="60%" align="middle"/></p>
Of course, real machine learning models have a number of parameters much greater than 2, so the parameter space of
the model is virtually never two-dimensional. Because we can't print visualizations in more than two dimensions,
we cannot hope to visualize the "true" shape of the loss landscape. Instead, a number of techniques
exist for reducing the parameter space to one or two dimensions, ranging from dimensionality reduction techniques
like PCA, to restricting ourselves to a particular subspace of the overall parameter space. For more details,
read Li et al's paper.
## 2. Base Example: Supervised Loss in Parameter Subspaces
The simplest use case for `loss-landscapes` is to estimate the value of a supervised loss function in a subspace
of a neural network's parameter space. The subspace in question may be a point, a line, or a plane (these subspaces
can be meaningfully visualized). Suppose the user has trained a supervised learning model, of type `torch.nn.Module`,
on a dataset consisting of samples `X` and labels `y`, by minimizing some loss function. The user now wishes to
produce a surface plot akin to the one in section 1.
This is accomplished as follows:
````python
from loss_landscapes import random_plane
from loss_landscapes.metrics import Loss

metric = Loss(loss_function, X, y)
landscape = random_plane(model, metric, normalize='filter')
````
As seen in the example above, the two core concepts in `loss-landscapes` are _metrics_ and _parameter subspaces_. The
latter define the section of parameter space to be considered, while the former define what quantity is evaluated at
each considered point in parameter space, and how it is computed. In the example above, we define a `Loss` metric
over data `X` and labels `y`, and instruct `loss-landscapes` to evaluate it in a randomly generated planar subspace.
This returns a 2-dimensional array of loss values, which the user can plot in any way they choose. Example
visualizations that can be produced from this type of data are shown below.
<p align="center"><img src="/img/loss-contour.png" width="75%" align="middle"/></p>
<p align="center"><img src="/img/loss-contour-3d.png" width="75%" align="middle"/></p>
Check the `examples` directory for Jupyter notebooks with more in-depth examples of what is possible.
## 3. Metrics and Custom Metrics
The `loss-landscapes` library can compute any quantity of interest at a collection of points in a parameter subspace,
not just loss. This is accomplished using a `Metric`: a callable object which applies a pre-determined function,
such as a cross-entropy loss with a specific set of inputs and outputs, to the model. The `loss_landscapes.metrics`
package contains a number of metrics that cover common use cases, such as `Loss` (evaluates a loss
function), `LossGradient` (evaluates the gradient of the loss w.r.t. the model parameters),
`PrincipalCurvatureEvaluator` (evaluates the principal curvatures of the loss function), and more.
Furthermore, the user can add custom metrics by subclassing `Metric`. As an example, consider the library
implementation of `Loss`, for `torch` models:
````python
import abc

import torch

from loss_landscapes.model_interface.model_wrapper import ModelWrapper


class Metric(abc.ABC):
    """ A quantity that can be computed given a model or an agent. """

    def __init__(self):
        super().__init__()

    @abc.abstractmethod
    def __call__(self, model_wrapper: ModelWrapper):
        pass


class Loss(Metric):
    """ Computes a specified loss function over specified input-output pairs. """

    def __init__(self, loss_fn, inputs: torch.Tensor, target: torch.Tensor):
        super().__init__()
        self.loss_fn = loss_fn
        self.inputs = inputs
        self.target = target

    def __call__(self, model_wrapper: ModelWrapper) -> float:
        return self.loss_fn(model_wrapper.forward(self.inputs), self.target).item()
````
The user may create custom `Metric`s in a similar manner. One complication is that the `Metric` class'
`__call__` method is designed to take as input a `ModelWrapper` rather than a model. This class is internal
to the library and exists to facilitate the handling of the myriad of different models a user may pass as
inputs to a function such as `loss_landscapes.planar_interpolation()`. It is sufficient for the user to know
that a `ModelWrapper` is a callable object that can be used to call the model on a given input (see the `call_fn`
argument of the `GeneralModelWrapper` class in the next section). The class also provides a `get_model()` method
that exposes a reference to the underlying model, should the user wish to carry out more complicated operations
on it.
In summary, the `Metric` abstraction adds a great degree of flexibility. A metric defines what quantity
dependent on model parameters the user is interested in evaluating, and how to evaluate it. The user could define,
for example, a metric that computes an estimate of the expected return of a reinforcement learning agent.
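As an illustration, a hypothetical `Accuracy` metric for a classifier could be sketched as follows (`Accuracy` is
not part of the library; the sketch assumes the `Metric` and `ModelWrapper` classes shown above):
````python
# Hypothetical custom metric, for illustration only: classification accuracy
# over a fixed batch. Assumes the Metric and ModelWrapper classes above.
import torch

class Accuracy(Metric):
    """ Computes classification accuracy over specified input-output pairs. """
    def __init__(self, inputs: torch.Tensor, target: torch.Tensor):
        super().__init__()
        self.inputs = inputs
        self.target = target

    def __call__(self, model_wrapper: ModelWrapper) -> float:
        predictions = model_wrapper.forward(self.inputs).argmax(dim=1)
        return (predictions == self.target).float().mean().item()
````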
## 4. More Complex Models
In the common case of a simple supervised learning model, as in the sections above, client code calls functions
such as `loss_landscapes.linear_interpolation` and passes as argument a PyTorch module of type `torch.nn.Module`.
For more complex cases, such as when the user wants to evaluate the loss landscape as a function of a subset of
the model parameters, or the expected return landscape for an RL agent, the user must specify to the `loss-landscapes`
library how to interface with the model (or the agent, on a more general level). This is accomplished using a
`ModelWrapper` object, which hides the implementation details of the model or agent. For general use, the library
supplies the `GeneralModelWrapper` in the `loss_landscapes.model_interface.model_wrapper` module.
Assume the user wishes to estimate the expected return of some RL agent which provides an `agent.act(observation)`
method for action selection. Then, the example from section 2 becomes as follows:
````python
from loss_landscapes import random_plane
from loss_landscapes.model_interface.model_wrapper import GeneralModelWrapper

# ExpectedReturnMetric is an expected-return metric like the one described in section 3
metric = ExpectedReturnMetric(env, n_samples)
agent_wrapper = GeneralModelWrapper(agent, [agent.q_function, agent.policy], lambda agent, x: agent.act(x))
landscape = random_plane(agent_wrapper, metric, normalize='filter')
````
## 5. WIP: Connecting Paths, Saddle Points, and Trajectory Tracking
A number of features are currently under development but are as yet incomplete.
A number of papers in recent years have shown that loss landscapes of neural networks are dominated by a
proliferation of saddle points, that good solutions are better described as large low-loss plateaus than as
"well-bottom" points, and that for sufficiently high-dimensional networks, a low-loss path in parameter space can
be found between almost any pair of minima. In the future, the `loss-landscapes` library will feature
implementations of algorithms for finding such low-loss connecting paths in the loss landscape, as well as tools to
facilitate the study of saddle points.
Trajectory tracking features are also under consideration, though at this time it's unclear what this
should actually mean, as the optimization trajectory is implicitly tracked by the user's training loop. Any metric
along the optimization trajectory can be tracked with libraries such as [ignite](https://github.com/pytorch/ignite)
for PyTorch.
## 6. Support for Other DL Libraries
The `loss-landscapes` library was initially designed to be agnostic to the DL framework in use. However, with the
increasing number of use cases to cover, it became obvious that maintaining the original library-agnostic design
was adding too much complexity to the code.
A TensorFlow version, `loss-landscapes-tf`, is planned for the future.
## 7. Installation and Use
The package is available on PyPI. Install using `pip install loss-landscapes`. To use the library, import as follows:
````python
import loss_landscapes
import loss_landscapes.metrics
````
%package help
Summary: Development documents and examples for loss-landscapes
Provides: python3-loss-landscapes-doc
%description help
# loss-landscapes
`loss-landscapes` is a PyTorch library for approximating neural network loss functions, and other related metrics,
in low-dimensional subspaces of the model's parameter space. The library makes the production of visualizations
such as those seen in [Visualizing the Loss Landscape of Neural Nets](https://arxiv.org/abs/1712.09913v3) much
easier, aiding the analysis of the geometry of neural network loss landscapes.
This library does not provide plotting facilities, letting the user define how the data should be plotted. Other
deep learning frameworks are not supported, though a TensorFlow version, `loss-landscapes-tf`, is planned for
a future release.
**NOTE: this library is in early development. Bugs are virtually a certainty, and the API is volatile. Do not use
this library in production code. For prototyping and research, always use the newest version of the library.**
## 1. What is a Loss Landscape?
Let `L : Parameters -> Real Numbers` be a loss function, which maps a point in the model parameter space to a
real number. For a neural network with `n` parameters, the loss function `L` takes an `n`-dimensional input. We
can define the loss landscape as the set of all `n+1`-dimensional points `(param, L(param))`, for all points
`param` in the parameter space. For example, the image below, reproduced from the paper by Li et al (2018), link
above, provides a visual representation of what a loss function over a two-dimensional parameter space might look
like:
<p align="center"><img src="/img/loss-landscape.png" width="60%" align="middle"/></p>
Of course, real machine learning models have a number of parameters much greater than 2, so the parameter space of
the model is virtually never two-dimensional. Because we can't print visualizations in more than two dimensions,
we cannot hope to visualize the "true" shape of the loss landscape. Instead, a number of techniques
exist for reducing the parameter space to one or two dimensions, ranging from dimensionality reduction techniques
like PCA, to restricting ourselves to a particular subspace of the overall parameter space. For more details,
read Li et al's paper.
## 2. Base Example: Supervised Loss in Parameter Subspaces
The simplest use case for `loss-landscapes` is to estimate the value of a supervised loss function in a subspace
of a neural network's parameter space. The subspace in question may be a point, a line, or a plane (these subspaces
can be meaningfully visualized). Suppose the user has trained a supervised learning model, of type `torch.nn.Module`,
on a dataset consisting of samples `X` and labels `y`, by minimizing some loss function. The user now wishes to
produce a surface plot akin to the one in section 1.
This is accomplished as follows:
````python
from loss_landscapes import random_plane
from loss_landscapes.metrics import Loss

metric = Loss(loss_function, X, y)
landscape = random_plane(model, metric, normalize='filter')
````
As seen in the example above, the two core concepts in `loss-landscapes` are _metrics_ and _parameter subspaces_. The
latter define the section of parameter space to be considered, while the former define what quantity is evaluated at
each considered point in parameter space, and how it is computed. In the example above, we define a `Loss` metric
over data `X` and labels `y`, and instruct `loss-landscapes` to evaluate it in a randomly generated planar subspace.
This returns a 2-dimensional array of loss values, which the user can plot in any way they choose. Example
visualizations that can be produced from this type of data are shown below.
<p align="center"><img src="/img/loss-contour.png" width="75%" align="middle"/></p>
<p align="center"><img src="/img/loss-contour-3d.png" width="75%" align="middle"/></p>
Check the `examples` directory for Jupyter notebooks with more in-depth examples of what is possible.
## 3. Metrics and Custom Metrics
The `loss-landscapes` library can compute any quantity of interest at a collection of points in a parameter subspace,
not just loss. This is accomplished using a `Metric`: a callable object which applies a pre-determined function,
such as a cross-entropy loss with a specific set of inputs and outputs, to the model. The `loss_landscapes.metrics`
package contains a number of metrics that cover common use cases, such as `Loss` (evaluates a loss
function), `LossGradient` (evaluates the gradient of the loss w.r.t. the model parameters),
`PrincipalCurvatureEvaluator` (evaluates the principal curvatures of the loss function), and more.
Furthermore, the user can add custom metrics by subclassing `Metric`. As an example, consider the library
implementation of `Loss`, for `torch` models:
````python
import abc

import torch

from loss_landscapes.model_interface.model_wrapper import ModelWrapper


class Metric(abc.ABC):
    """ A quantity that can be computed given a model or an agent. """

    def __init__(self):
        super().__init__()

    @abc.abstractmethod
    def __call__(self, model_wrapper: ModelWrapper):
        pass


class Loss(Metric):
    """ Computes a specified loss function over specified input-output pairs. """

    def __init__(self, loss_fn, inputs: torch.Tensor, target: torch.Tensor):
        super().__init__()
        self.loss_fn = loss_fn
        self.inputs = inputs
        self.target = target

    def __call__(self, model_wrapper: ModelWrapper) -> float:
        return self.loss_fn(model_wrapper.forward(self.inputs), self.target).item()
````
The user may create custom `Metric`s in a similar manner. One complication is that the `Metric` class'
`__call__` method is designed to take as input a `ModelWrapper` rather than a model. This class is internal
to the library and exists to facilitate the handling of the myriad of different models a user may pass as
inputs to a function such as `loss_landscapes.planar_interpolation()`. It is sufficient for the user to know
that a `ModelWrapper` is a callable object that can be used to call the model on a given input (see the `call_fn`
argument of the `GeneralModelWrapper` class in the next section). The class also provides a `get_model()` method
that exposes a reference to the underlying model, should the user wish to carry out more complicated operations
on it.
In summary, the `Metric` abstraction adds a great degree of flexibility. A metric defines what quantity
dependent on model parameters the user is interested in evaluating, and how to evaluate it. The user could define,
for example, a metric that computes an estimate of the expected return of a reinforcement learning agent.
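As an illustration, a hypothetical `Accuracy` metric for a classifier could be sketched as follows (`Accuracy` is
not part of the library; the sketch assumes the `Metric` and `ModelWrapper` classes shown above):
````python
# Hypothetical custom metric, for illustration only: classification accuracy
# over a fixed batch. Assumes the Metric and ModelWrapper classes above.
import torch

class Accuracy(Metric):
    """ Computes classification accuracy over specified input-output pairs. """
    def __init__(self, inputs: torch.Tensor, target: torch.Tensor):
        super().__init__()
        self.inputs = inputs
        self.target = target

    def __call__(self, model_wrapper: ModelWrapper) -> float:
        predictions = model_wrapper.forward(self.inputs).argmax(dim=1)
        return (predictions == self.target).float().mean().item()
````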
## 4. More Complex Models
In the common case of a simple supervised learning model, as in the sections above, client code calls functions
such as `loss_landscapes.linear_interpolation` and passes as argument a PyTorch module of type `torch.nn.Module`.
For more complex cases, such as when the user wants to evaluate the loss landscape as a function of a subset of
the model parameters, or the expected return landscape for an RL agent, the user must specify to the `loss-landscapes`
library how to interface with the model (or the agent, on a more general level). This is accomplished using a
`ModelWrapper` object, which hides the implementation details of the model or agent. For general use, the library
supplies the `GeneralModelWrapper` in the `loss_landscapes.model_interface.model_wrapper` module.
Assume the user wishes to estimate the expected return of some RL agent which provides an `agent.act(observation)`
method for action selection. Then, the example from section 2 becomes as follows:
````python
from loss_landscapes import random_plane
from loss_landscapes.model_interface.model_wrapper import GeneralModelWrapper

# ExpectedReturnMetric is an expected-return metric like the one described in section 3
metric = ExpectedReturnMetric(env, n_samples)
agent_wrapper = GeneralModelWrapper(agent, [agent.q_function, agent.policy], lambda agent, x: agent.act(x))
landscape = random_plane(agent_wrapper, metric, normalize='filter')
````
## 5. WIP: Connecting Paths, Saddle Points, and Trajectory Tracking
A number of features are currently under development but are as yet incomplete.
A number of papers in recent years have shown that loss landscapes of neural networks are dominated by a
proliferation of saddle points, that good solutions are better described as large low-loss plateaus than as
"well-bottom" points, and that for sufficiently high-dimensional networks, a low-loss path in parameter space can
be found between almost any pair of minima. In the future, the `loss-landscapes` library will feature
implementations of algorithms for finding such low-loss connecting paths in the loss landscape, as well as tools to
facilitate the study of saddle points.
Trajectory tracking features are also under consideration, though at this time it's unclear what this
should actually mean, as the optimization trajectory is implicitly tracked by the user's training loop. Any metric
along the optimization trajectory can be tracked with libraries such as [ignite](https://github.com/pytorch/ignite)
for PyTorch.
## 6. Support for Other DL Libraries
The `loss-landscapes` library was initially designed to be agnostic to the DL framework in use. However, with the
increasing number of use cases to cover, it became obvious that maintaining the original library-agnostic design
was adding too much complexity to the code.
A TensorFlow version, `loss-landscapes-tf`, is planned for the future.
## 7. Installation and Use
The package is available on PyPI. Install using `pip install loss-landscapes`. To use the library, import as follows:
````python
import loss_landscapes
import loss_landscapes.metrics
````
%prep
%autosetup -n loss_landscapes-3.0.6
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "\"/%h/%f.gz\"\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-loss-landscapes -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Tue Jun 20 2023 Python_Bot <Python_Bot@openeuler.org> - 3.0.6-1
- Package Spec generated