-rw-r--r--  .gitignore                  |   1
-rw-r--r--  python-django-cacheops.spec | 747
-rw-r--r--  sources                     |   1
3 files changed, 749 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..423da25 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/django-cacheops-6.2.tar.gz
diff --git a/python-django-cacheops.spec b/python-django-cacheops.spec
new file mode 100644
index 0000000..a8a080d
--- /dev/null
+++ b/python-django-cacheops.spec
@@ -0,0 +1,747 @@
+%global _empty_manifest_terminate_build 0
+Name: python-django-cacheops
+Version: 6.2
+Release: 1
+Summary: A slick ORM cache with automatic granular event-driven invalidation for Django
+License: BSD
+URL: http://github.com/Suor/django-cacheops
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/17/56/997d73970d7a226b5a2982471f8c5ab822fbe05ea1bb694c6c5f099c8131/django-cacheops-6.2.tar.gz
+BuildArch: noarch
+
+
+%global _description %{expand:
+A slick app that supports automatic or manual queryset caching and `automatic
+granular event-driven invalidation <http://suor.github.io/blog/2014/03/09/on-orm-cache-invalidation/>`_.
+It uses `redis <http://redis.io/>`_ as the backend for the ORM cache, and redis or
+the filesystem for the simple time-invalidated one.
+There is more to it:
+
+- decorators to cache any user function or view as a queryset or by time
+- extensions for django and jinja2 templates
+- transparent transaction support
+- dog-pile prevention mechanism
+- a couple of hacks to make django faster
+
+Requirements
+++++++++++++
+Python 3.5+, Django 2.1+ and Redis 4.0+.
+
+Installation
+++++++++++++
+Using pip:
+
+    $ pip install django-cacheops
+
+    # Or from github directly
+    $ pip install git+https://github.com/Suor/django-cacheops.git@master
+
+Setup
++++++
+Add ``cacheops`` to your ``INSTALLED_APPS``.
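+For instance, a minimal sketch of the settings change (the surrounding apps are placeholders):
+
+    INSTALLED_APPS = [
+        # ... your other apps ...
+        'cacheops',
+    ]
+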
+Set up the redis connection and enable caching for the desired models:
+
+    CACHEOPS_REDIS = {
+        'host': 'localhost',    # redis-server is on same machine
+        'port': 6379,           # default redis port
+        'db': 1,                # SELECT non-default redis database
+                                # using a separate redis db or redis instance
+                                # is highly recommended
+        'socket_timeout': 3,    # connection timeout in seconds, optional
+        'password': '...',      # optional
+        'unix_socket_path': ''  # replaces host and port
+    }
+
+    # Alternatively the redis connection can be defined using a URL:
+    CACHEOPS_REDIS = "redis://localhost:6379/1"
+    # or
+    CACHEOPS_REDIS = "unix://path/to/socket?db=1"
+    # or with a password (note the colon)
+    CACHEOPS_REDIS = "redis://:password@localhost:6379/1"
+
+    # If you want to use sentinel, specify this variable
+    CACHEOPS_SENTINEL = {
+        'locations': [('localhost', 26379)],  # sentinel locations, required
+        'service_name': 'mymaster',           # sentinel service name, required
+        'socket_timeout': 0.1,                # connection timeout in seconds, optional
+        'db': 0                               # redis database, default: 0
+    }
+
+    # To use your own redis client class, it should be compatible with,
+    # or a subclass of, cacheops.redis.CacheopsRedis
+    CACHEOPS_CLIENT_CLASS = 'your.redis.ClientClass'
+
+    CACHEOPS = {
+        # Automatically cache any User.objects.get() calls for 15 minutes
+        # This also includes .first() and .last() calls,
+        # as well as request.user or post.author access,
+        # where Post.author is a foreign key to auth.User
+        'auth.user': {'ops': 'get', 'timeout': 60*15},
+
+        # Automatically cache all gets and queryset fetches
+        # to other django.contrib.auth models for an hour
+        'auth.*': {'ops': {'fetch', 'get'}, 'timeout': 60*60},
+
+        # Cache all queries to Permission
+        # 'all' is an alias for {'get', 'fetch', 'count', 'aggregate', 'exists'}
+        'auth.permission': {'ops': 'all', 'timeout': 60*60},
+
+        # Enable manual caching on all other models with a default timeout of an hour
+        # Use Post.objects.cache().get(...)
+        # or Tags.objects.filter(...).order_by(...).cache()
+        # to cache a particular ORM request.
+        # Invalidation is still automatic
+        '*.*': {'ops': (), 'timeout': 60*60},
+
+        # And since ops is empty by default you can rewrite the last line as:
+        '*.*': {'timeout': 60*60},
+
+        # NOTE: binding signals has its overhead, like preventing fast mass deletes,
+        # so you might want to only register the models you cache and their dependencies.
+
+        # Finally, you can explicitly forbid even manual caching with:
+        'some_app.*': None,
+    }
+
+You can configure default profile settings with ``CACHEOPS_DEFAULTS``. This way you can rewrite the config above:
+
+    CACHEOPS_DEFAULTS = {
+        'timeout': 60*60
+    }
+    CACHEOPS = {
+        'auth.user': {'ops': 'get', 'timeout': 60*15},
+        'auth.*': {'ops': ('fetch', 'get')},
+        'auth.permission': {'ops': 'all'},
+        '*.*': {},
+    }
+
+Using ``'*.*'`` with non-empty ``ops`` is **not recommended**,
+since it will easily cache things you don't intend to, or don't even know about, such as migration tables.
+The better approach is to restrict by app with ``'app_name.*'``.
+
+Besides the ``ops`` and ``timeout`` options you can also use:
+
+``local_get: True``
+    To cache simple gets for this model in process-local memory.
+    This is very fast, but is not invalidated in any way until the process is restarted.
+    Still, it could be useful for extremely rarely changed things.
+
+``cache_on_save=True | 'field_name'``
+    To write an instance to the cache upon save.
+    The cached instance will be retrieved on a ``.get(field_name=...)`` request.
+    Setting it to ``True`` causes caching by primary key.
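+
+For example, a sketch combining both options (the ``geo.country`` and ``blog.post`` model labels are placeholders):
+
+    CACHEOPS = {
+        # a tiny, almost-static lookup table: process-local gets are fine
+        'geo.country': {'ops': 'get', 'timeout': 60*60, 'local_get': True},
+        # write saved posts to the cache, retrievable via .get(slug=...)
+        'blog.post': {'ops': 'get', 'timeout': 60*15, 'cache_on_save': 'slug'},
+    }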
+
+Additionally, you can tell cacheops to degrade gracefully when redis fails:
+
+    CACHEOPS_DEGRADE_ON_FAILURE = True
+
+There is also a way to make all cacheops methods and decorators no-ops, e.g. for testing:
+
+    from django.test import override_settings
+
+    @override_settings(CACHEOPS_ENABLED=False)
+    def test_something():
+        # ...
+        assert cond
+
+Usage
++++++
+
+| **Automatic caching**
+
+It's automatic, you just need to set it up.
+
+| **Manual caching**
+
+You can force any queryset to use the cache by calling its ``.cache()`` method:
+
+    Article.objects.filter(tag=2).cache()
+
+Here you can specify which ops should be cached for the queryset; for example, this code:
+
+    from django.core.paginator import Paginator
+
+    qs = Article.objects.filter(tag=2).cache(ops=['count'])
+    paginator = Paginator(qs, ipp)
+    articles = list(paginator.page(page_num))  # hits database
+
+will cache the count call in ``Paginator`` but not the later articles fetch.
+There are five possible actions - ``get``, ``fetch``, ``count``, ``aggregate`` and ``exists``.
+You can pass any subset of these ops to the ``.cache()`` method, even an empty one, to turn off caching.
+There is, however, a shortcut for the latter:
+
+    qs = Article.objects.filter(visible=True).nocache()
+    qs1 = qs.filter(tag=2)       # hits database
+    qs2 = qs.filter(category=3)  # hits it once more
+
+It is useful when you want to disable automatic caching on a particular queryset.
+You can also override the default timeout for a particular queryset with ``.cache(timeout=...)``.
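+For instance (a sketch):
+
+    # cache this queryset's results for 30 seconds instead of the model default
+    articles = list(Article.objects.filter(tag=2).cache(timeout=30))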
+
+| **Function caching**
+
+You can cache and invalidate the result of a function the same way as a queryset.
+Cached results of the next function will be invalidated on any ``Article`` change,
+addition or deletion:
+
+    from django.db.models import Count
+    from cacheops import cached_as
+
+    @cached_as(Article, timeout=120)
+    def article_stats():
+        return {
+            'tags': list(Article.objects.values('tag').annotate(Count('id'))),
+            'categories': list(Article.objects.values('category').annotate(Count('id'))),
+        }
+
+Note that we are using ``list`` on both querysets here; that's because we don't want
+to cache queryset objects but their results.
+Also note that if you want to filter a queryset based on arguments,
+e.g. to make invalidation more granular, you can use a local function:
+
+    def articles_block(category, count=5):
+        qs = Article.objects.filter(category=category)
+
+        @cached_as(qs, extra=count)
+        def _articles_block():
+            articles = list(qs.filter(photo=True)[:count])
+            if len(articles) < count:
+                articles += list(qs.filter(photo=False)[:count - len(articles)])
+            return articles
+
+        return _articles_block()
+
+We added ``extra`` here to get different keys for calls with the same ``category`` but different
+``count``. The cache key also depends on the function's arguments, so we could instead pass ``count`` as
+an argument to the inner function. We also omitted ``timeout`` here, so the default for the model
+will be used.
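+A sketch of that alternative, passing ``count`` to the inner function instead of using ``extra``:
+
+    def articles_block(category, count=5):
+        qs = Article.objects.filter(category=category)
+
+        @cached_as(qs)
+        def _articles_block(count):
+            articles = list(qs.filter(photo=True)[:count])
+            if len(articles) < count:
+                articles += list(qs.filter(photo=False)[:count - len(articles)])
+            return articles
+
+        # count is part of the cache key because it is a function argument
+        return _articles_block(count)
+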
+Another possibility is to make a function cache invalidate on changes to any one of several models:
+
+    @cached_as(Article.objects.filter(public=True), Tag)
+    def article_stats():
+        return {...}
+
+As you can see, we can mix querysets and models here.
+
+| **View caching**
+
+You can also cache and invalidate a view as a queryset. This works mostly the same way as function
+caching, but only the path of the request parameter is used to construct the cache key:
+
+    from cacheops import cached_view_as
+
+    @cached_view_as(News)
+    def news_index(request):
+        # ...
+        return render(...)
+
+You can pass ``timeout``, ``extra`` and several samples the same way as to ``@cached_as()``. Note that you can pass a function as ``extra``:
+
+    @cached_view_as(News, extra=lambda req: req.user.is_staff)
+    def news_index(request):
+        # ... add extra things for staff
+        return render(...)
+
+A function passed as ``extra`` receives the same arguments as the cached function.
+Class-based views can also be cached:
+
+    class NewsIndex(ListView):
+        model = News
+
+    news_index = cached_view_as(News, ...)(NewsIndex.as_view())
+
+Invalidation
+++++++++++++
+Cacheops uses both time and event-driven invalidation. The event-driven one
+listens to model signals and invalidates the appropriate caches on ``Model.save()``, ``.delete()``
+and m2m changes.
+Invalidation tries to be granular, which means it won't invalidate a queryset
+that cannot be influenced by the added/updated/deleted object, judging by the query
+conditions. Most of the time this will do what you want; if it doesn't, you can use
+one of the following:
+
+    from cacheops import invalidate_obj, invalidate_model, invalidate_all
+
+    invalidate_obj(some_article)  # invalidates queries affected by some_article
+    invalidate_model(Article)     # invalidates all queries for the model
+    invalidate_all()              # flush the redis cache database
+
+And last, there is the ``invalidate`` command::
+
+    ./manage.py invalidate articles.Article.34  # same as invalidate_obj
+    ./manage.py invalidate articles.Article     # same as invalidate_model
+    ./manage.py invalidate articles             # invalidate all models in articles
+
+And the one that FLUSHES the cacheops redis database::
+
+    ./manage.py invalidate all
+
+Don't use that if you share the redis database between the cache and something else.
+
+| **Turning off and postponing invalidation**
+
+There is also a way to turn off invalidation for a while:
+
+    from cacheops import no_invalidation
+
+    with no_invalidation:
+        # ... do some changes
+        obj.save()
+
+It also works as a decorator:
+
+    @no_invalidation
+    def some_work(...):
+        # ... do some changes
+        obj.save()
+
+Combined with ``try ... finally`` it can be used to postpone invalidation:
+
+    try:
+        with no_invalidation:
+            # ...
+    finally:
+        invalidate_obj(...)
+        # ... or
+        invalidate_model(...)
+
+Postponing invalidation can speed up batch jobs.
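+For example, a sketch of such a batch job (``Article`` and ``recalc_stats()`` are placeholders):
+
+    from cacheops import no_invalidation, invalidate_model
+
+    try:
+        with no_invalidation:
+            for article in Article.objects.all():
+                article.recalc_stats()  # hypothetical model method
+                article.save()
+    finally:
+        invalidate_model(Article)  # one invalidation instead of one per save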
+
+| **Mass updates**
+
+Normally ``qs.update(...)`` doesn't emit any events and thus doesn't trigger invalidation.
+And there is no transparent and efficient way to do that: acting on the update conditions would
+invalidate too much if they are orthogonal to many queries' conditions,
+and acting on the specific objects would require fetching all of them,
+which ``QuerySet.update()`` users generally try to avoid.
+In case you actually want to do the latter, cacheops provides a shortcut:
+
+    qs.invalidated_update(...)
+
+Note that all the updated objects are fetched twice, prior to and after the update.
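+For instance (a sketch; the filter and field are placeholders):
+
+    # updates the rows and invalidates caches for each affected article
+    Article.objects.filter(category=3).invalidated_update(public=False)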
+}
+
+%description %_description
+
+%package -n python3-django-cacheops
+Summary: A slick ORM cache with automatic granular event-driven invalidation for Django
+Provides: python-django-cacheops
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-django-cacheops %_description
+
+%package help
+Summary: Development documents and examples for django-cacheops
+Provides: python3-django-cacheops-doc
+%description help %_description
+
+%prep
+%autosetup -n django-cacheops-6.2
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-django-cacheops -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Thu Mar 09 2023 Python_Bot <Python_Bot@openeuler.org> - 6.2-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..2668be7
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+080ae536ebb8112582eefee82838d0d3 django-cacheops-6.2.tar.gz