author     CoprDistGit <infra@openeuler.org>  2023-06-20 04:04:20 +0000
committer  CoprDistGit <infra@openeuler.org>  2023-06-20 04:04:20 +0000
commit     11ce86335916f1b8e5eb866e0183882aa146fb2c (patch)
tree       b4c9274d5d4a4092757181cb61d200781e4a6ea5
parent     3240496148c523ebe889f55571611c5f0f5acf06 (diff)
automatic import of python-nbautoeval (openeuler20.03)
-rw-r--r--  .gitignore                1
-rw-r--r--  python-nbautoeval.spec  395
-rw-r--r--  sources                   1
3 files changed, 397 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..d5e6abc 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/nbautoeval-1.7.0.tar.gz
diff --git a/python-nbautoeval.spec b/python-nbautoeval.spec
new file mode 100644
index 0000000..7a8cfac
--- /dev/null
+++ b/python-nbautoeval.spec
@@ -0,0 +1,395 @@
+%global _empty_manifest_terminate_build 0
+Name: python-nbautoeval
+Version: 1.7.0
+Release: 1
+Summary: A mini framework to implement auto-evaluated exercises in Jupyter notebooks
+License: CC BY-SA 4.0
+URL: https://github.com/parmentelat/nbautoeval
+Source0: https://mirrors.aliyun.com/pypi/web/packages/3f/6e/f07bd2278329709c2a76fddc6c7d11f9235dd141a9b10ce904f0e07a4fde/nbautoeval-1.7.0.tar.gz
+BuildArch: noarch
+
+Requires: python3-ipython
+Requires: python3-ipywidgets
+Requires: python3-numpy
+Requires: python3-PyYAML
+Requires: python3-myst-parser
+
+%description
+# `nbautoeval`
+
+`nbautoeval` is a very lightweight python framework for creating **auto-evaluated**
+exercises inside a jupyter (python) notebook.
+
+Two flavours of exercises are supported at this point:
+
+* code-oriented: given a text that describes the expectations, students are invited to
+  write their own code, and can then see the outcome on teacher-defined data samples,
+  compared with the results obtained from a teacher-provided solution, with visual
+  (green/red) feedback
+* quizzes: a separate module allows creating quizzes
+
+At this point, due to a lack of knowledge/documentation about open/edx (read: the
+version running at FUN), there is no available code for exporting the results as
+grades or anything similar (hence the `autoeval` name).
+
+There are, however, provisions in the code to accumulate statistics on all
+attempted corrections, so as to provide feedback to teachers.
+
+# Try it on `mybinder`
+
+Click the badge below to see a few sample demos under `mybinder.org` - it's all
+in the `demo-notebooks` subdir.
+
+**NOTE**: the demo notebooks ship in `.py` format and require `jupytext` to be
+installed before you can open them in Jupyter.
+
+[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/parmentelat/nbautoeval/master?filepath=demo-notebooks)
+
+
+# History
+
+This was initially embedded into a [MOOC on
+python2](https://github.com/parmentelat/flotpython) that ran for the first time on [the
+French FUN platform](https://www.france-universite-numerique-mooc.fr/) in Fall 2014. It
+was then duplicated into a [MOOC on
+bioinformatics](https://github.com/parmentelat/flotbioinfo) in Spring 2016 where it was
+named `nbautoeval` for the first time, but still embedded in a greater git module.
+
+The current git repo was created in June 2016 from that basis, with the intention
+of being used as a git subtree from these 2 repos, and possibly others, since a few
+people have expressed interest.
+
+# Installation
+
+```
+pip install nbautoeval
+```
+
+# Overview
+
+## code-oriented
+
+The following types of exercises are currently supported:
+ * `ExerciseFunction`: the student is asked to write a function
+ * `ExerciseRegexp`: the student is asked to write a regular expression
+ * `ExerciseGenerator`: the student is asked to write a generator function
+ * `ExerciseClass`: tests will happen on a class implementation
+
+A teacher who wishes to implement an exercise needs to write two parts (a minimal
+sketch is given after this list):
+
+* One Python file that defines an instance of an exercise class; in a nutshell this
+  typically involves
+  * providing one solution (let's say a function) written in Python
+  * providing a set of input data
+  * plus optionally various tweaks for rendering results
+
+* One notebook that imports this exercise object, and can then take advantage of it to
+  write Jupyter cells that typically
+  * invoke `example()` on the exercise object to show examples of the expected output
+  * invite the student to write their own code
+  * invoke `correction()` on the exercise object to display the outcome.
+
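+As a minimal sketch, an exercise file could look like the following; the names
+`squares`, `inputs_squares` and `exo_squares` are made up for illustration, and the
+exact constructor options may differ - see the demo notebooks for reference:
+
+```python
+# exercise definition file, meant to be imported from the notebook
+from nbautoeval import Args, ExerciseFunction
+
+# the teacher-provided solution
+def squares(n):
+    """return the list of the squares of 0, 1, .., n-1"""
+    return [i**2 for i in range(n)]
+
+# teacher-defined data samples; each Args(...) describes the arguments of one call
+inputs_squares = [Args(0), Args(3), Args(10)]
+
+# the exercise object; the notebook then runs
+#   exo_squares.example()                    to show expected outputs
+#   exo_squares.correction(student_squares)  to display the green/red outcome
+exo_squares = ExerciseFunction(squares, inputs_squares)
+```
+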
+## quizzes
+
+Here again there are two parts at work:
+
+* The recommended way is to define quizzes in YAML format:
+  * one YAML file can contain several quizzes - see examples in the `yaml/` subdir
+  * and each quiz contains a set of questions
+  * grouping questions into quizzes essentially makes sense with respect to the maximal
+    number of attempts
+  * most of the pieces can be written in markdown (currently we use `myst_parser`)
+
+* then one invokes `run_yaml_quiz()` from a notebook to display the quiz (see the call
+  sketched after this list)
+  * this function takes 2 arguments: one to help locate the YAML file,
+    and one to spot the quiz inside the YAML file
+  * run with `debug=True` to pinpoint errors in the source
+
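+For instance, a notebook cell could read as follows; the two string arguments are
+hypothetical names, and the import path assumes `run_yaml_quiz` is exposed at the
+package top level:
+
+```python
+from nbautoeval import run_yaml_quiz
+
+# first argument: helps locate the YAML file; second: the quiz inside that file
+# debug=True pinpoints errors in the YAML source
+run_yaml_quiz("quiz-sample", "quiz-sample-1", debug=True)
+```
+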
+## results and storage
+
+Regardless of their type, all tests have an `exoname` that is used to store information
+about that specific test; for quizzes it is recommended to use a different name than
+the quiz name used in `run_yaml_quiz()`, so that students can't guess it too easily.
+
+Data is stored in 2 separate locations:
+
+* `~/.nbautoeval.trace` contains one JSON line per attempt (correction or submit)
+* `~/.nbautoeval.storage`, for quizzes only, preserves previous choices and the number of attempts
+
+# Known issues
+
+see https://github.com/parmentelat/nbautoeval/issues
+
+
+
+
+%package -n python3-nbautoeval
+Summary: A mini framework to implement auto-evaluated exercises in Jupyter notebooks
+Provides: python-nbautoeval
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-nbautoeval
+# `nbautoeval`
+
+`nbautoeval` is a very lightweight python framework for creating **auto-evaluated**
+exercises inside a jupyter (python) notebook.
+
+Two flavours of exercises are supported at this point:
+
+* code-oriented: given a text that describes the expectations, students are invited to
+  write their own code, and can then see the outcome on teacher-defined data samples,
+  compared with the results obtained from a teacher-provided solution, with visual
+  (green/red) feedback
+* quizzes: a separate module allows creating quizzes
+
+At this point, due to a lack of knowledge/documentation about open/edx (read: the
+version running at FUN), there is no available code for exporting the results as
+grades or anything similar (hence the `autoeval` name).
+
+There are, however, provisions in the code to accumulate statistics on all
+attempted corrections, so as to provide feedback to teachers.
+
+# Try it on `mybinder`
+
+Click the badge below to see a few sample demos under `mybinder.org` - it's all
+in the `demo-notebooks` subdir.
+
+**NOTE**: the demo notebooks ship in `.py` format and require `jupytext` to be
+installed before you can open them in Jupyter.
+
+[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/parmentelat/nbautoeval/master?filepath=demo-notebooks)
+
+
+# History
+
+This was initially embedded into a [MOOC on
+python2](https://github.com/parmentelat/flotpython) that ran for the first time on [the
+French FUN platform](https://www.france-universite-numerique-mooc.fr/) in Fall 2014. It
+was then duplicated into a [MOOC on
+bioinformatics](https://github.com/parmentelat/flotbioinfo) in Spring 2016 where it was
+named `nbautoeval` for the first time, but still embedded in a greater git module.
+
+The current git repo was created in June 2016 from that basis, with the intention
+of being used as a git subtree from these 2 repos, and possibly others, since a few
+people have expressed interest.
+
+# Installation
+
+```
+pip install nbautoeval
+```
+
+# Overview
+
+## code-oriented
+
+The following types of exercises are currently supported:
+ * `ExerciseFunction`: the student is asked to write a function
+ * `ExerciseRegexp`: the student is asked to write a regular expression
+ * `ExerciseGenerator`: the student is asked to write a generator function
+ * `ExerciseClass`: tests will happen on a class implementation
+
+A teacher who wishes to implement an exercise needs to write two parts:
+
+* One Python file that defines an instance of an exercise class; in a nutshell this
+  typically involves
+  * providing one solution (let's say a function) written in Python
+  * providing a set of input data
+  * plus optionally various tweaks for rendering results
+
+* One notebook that imports this exercise object, and can then take advantage of it to
+  write Jupyter cells that typically
+  * invoke `example()` on the exercise object to show examples of the expected output
+  * invite the student to write their own code
+  * invoke `correction()` on the exercise object to display the outcome.
+
+## quizzes
+
+Here again there are two parts at work:
+
+* The recommended way is to define quizzes in YAML format:
+  * one YAML file can contain several quizzes - see examples in the `yaml/` subdir
+  * and each quiz contains a set of questions
+  * grouping questions into quizzes essentially makes sense with respect to the maximal
+    number of attempts
+  * most of the pieces can be written in markdown (currently we use `myst_parser`)
+
+* then one invokes `run_yaml_quiz()` from a notebook to display the quiz
+  * this function takes 2 arguments: one to help locate the YAML file,
+    and one to spot the quiz inside the YAML file
+  * run with `debug=True` to pinpoint errors in the source
+
+## results and storage
+
+Regardless of their type, all tests have an `exoname` that is used to store information
+about that specific test; for quizzes it is recommended to use a different name than
+the quiz name used in `run_yaml_quiz()`, so that students can't guess it too easily.
+
+Data is stored in 2 separate locations:
+
+* `~/.nbautoeval.trace` contains one JSON line per attempt (correction or submit)
+* `~/.nbautoeval.storage`, for quizzes only, preserves previous choices and the number of attempts
+
+# Known issues
+
+see https://github.com/parmentelat/nbautoeval/issues
+
+
+
+
+%package help
+Summary: Development documents and examples for nbautoeval
+Provides: python3-nbautoeval-doc
+%description help
+# `nbautoeval`
+
+`nbautoeval` is a very lightweight python framework for creating **auto-evaluated**
+exercises inside a jupyter (python) notebook.
+
+Two flavours of exercises are supported at this point:
+
+* code-oriented: given a text that describes the expectations, students are invited to
+  write their own code, and can then see the outcome on teacher-defined data samples,
+  compared with the results obtained from a teacher-provided solution, with visual
+  (green/red) feedback
+* quizzes: a separate module allows creating quizzes
+
+At this point, due to a lack of knowledge/documentation about open/edx (read: the
+version running at FUN), there is no available code for exporting the results as
+grades or anything similar (hence the `autoeval` name).
+
+There are, however, provisions in the code to accumulate statistics on all
+attempted corrections, so as to provide feedback to teachers.
+
+# Try it on `mybinder`
+
+Click the badge below to see a few sample demos under `mybinder.org` - it's all
+in the `demo-notebooks` subdir.
+
+**NOTE**: the demo notebooks ship in `.py` format and require `jupytext` to be
+installed before you can open them in Jupyter.
+
+[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/parmentelat/nbautoeval/master?filepath=demo-notebooks)
+
+
+# History
+
+This was initially embedded into a [MOOC on
+python2](https://github.com/parmentelat/flotpython) that ran for the first time on [the
+French FUN platform](https://www.france-universite-numerique-mooc.fr/) in Fall 2014. It
+was then duplicated into a [MOOC on
+bioinformatics](https://github.com/parmentelat/flotbioinfo) in Spring 2016 where it was
+named `nbautoeval` for the first time, but still embedded in a greater git module.
+
+The current git repo was created in June 2016 from that basis, with the intention
+of being used as a git subtree from these 2 repos, and possibly others, since a few
+people have expressed interest.
+
+# Installation
+
+```
+pip install nbautoeval
+```
+
+# Overview
+
+## code-oriented
+
+The following types of exercises are currently supported:
+ * `ExerciseFunction`: the student is asked to write a function
+ * `ExerciseRegexp`: the student is asked to write a regular expression
+ * `ExerciseGenerator`: the student is asked to write a generator function
+ * `ExerciseClass`: tests will happen on a class implementation
+
+A teacher who wishes to implement an exercise needs to write two parts:
+
+* One Python file that defines an instance of an exercise class; in a nutshell this
+  typically involves
+  * providing one solution (let's say a function) written in Python
+  * providing a set of input data
+  * plus optionally various tweaks for rendering results
+
+* One notebook that imports this exercise object, and can then take advantage of it to
+  write Jupyter cells that typically
+  * invoke `example()` on the exercise object to show examples of the expected output
+  * invite the student to write their own code
+  * invoke `correction()` on the exercise object to display the outcome.
+
+## quizzes
+
+Here again there are two parts at work:
+
+* The recommended way is to define quizzes in YAML format:
+  * one YAML file can contain several quizzes - see examples in the `yaml/` subdir
+  * and each quiz contains a set of questions
+  * grouping questions into quizzes essentially makes sense with respect to the maximal
+    number of attempts
+  * most of the pieces can be written in markdown (currently we use `myst_parser`)
+
+* then one invokes `run_yaml_quiz()` from a notebook to display the quiz
+  * this function takes 2 arguments: one to help locate the YAML file,
+    and one to spot the quiz inside the YAML file
+  * run with `debug=True` to pinpoint errors in the source
+
+## results and storage
+
+Regardless of their type, all tests have an `exoname` that is used to store information
+about that specific test; for quizzes it is recommended to use a different name than
+the quiz name used in `run_yaml_quiz()`, so that students can't guess it too easily.
+
+Data is stored in 2 separate locations:
+
+* `~/.nbautoeval.trace` contains one JSON line per attempt (correction or submit)
+* `~/.nbautoeval.storage`, for quizzes only, preserves previous choices and the number of attempts
+
+# Known issues
+
+see https://github.com/parmentelat/nbautoeval/issues
+
+
+
+
+%prep
+%autosetup -n nbautoeval-1.7.0
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "\"/%h/%f.gz\"\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-nbautoeval -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Jun 20 2023 Python_Bot <Python_Bot@openeuler.org> - 1.7.0-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..081f7fd
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+3cdfbf4285b05c42f952d8c8ac4c8889 nbautoeval-1.7.0.tar.gz