%global _empty_manifest_terminate_build 0
Name:		python-archr
Version:	9.2.49
Release:	1
Summary:	Target-centric program analysis.
License:	BSD-2-Clause
URL:		https://github.com/angr/archr
Source0:	https://mirrors.nju.edu.cn/pypi/web/packages/36/04/584a60229dc9a613165f8fc5fbf7cdcc725f799666c6d560a7a49f33c7ab/archr-9.2.49.tar.gz
BuildArch:	noarch

Requires:	python3-cle
Requires:	python3-docker
Requires:	python3-nclib
Requires:	python3-patchelf-wrapper
Requires:	python3-ply
Requires:	python3-pygdbmi
Requires:	python3-shellphish-qemu
Requires:	python3-angr
Requires:	python3-bintrace
Requires:	python3-qtrace

%description
# archr
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

Traditionally, binary analysis has been implicitly _program-centric_, meaning that the atomic unit of concern is the binary being analyzed. This assumption is usually implicit: `angr.Project` is instantiated with the binary in question, `afl` launches the binary itself (generally hyper-modified to make it easier to fuzz), and so on. However, outside of the CGC, programs do not exist in a vacuum. Specific library versions, values in configuration files, environment variables, and a myriad of other factors combine with the program binary itself to make a unique, holistic _target_, and in many cases it is that target that needs to be analyzed, not just the program itself. This is especially true for analyses that need extreme accuracy, such as automatic exploit generation.

`archr` is an implementation of such a _target-centric_ analysis paradigm. It consists of two main concepts: `Targets`, which describe the specification of the target itself, how it is configured, how it will be launched, and how it is interacted with; and `Analyzers`, which specialize targets for specific analysis actions, such as tracing, symbolic execution, and so on. To accomplish their tasks, Analyzers might inject `Implants` (e.g., qemu-user, gdbserver, and so on) into the target.

We have the following Targets:

* DockerImageTarget, which takes a description of the target in the form of a docker image
* LocalTarget, which describes running the target on the local system

The following Analyzers exist:

- DataScoutAnalyzer (grabs the memory map, environment, and auxv of the process, exactly as it is at launch)
- AngrProjectAnalyzer (can create an angr project with the right libs at the right offsets)
- AngrStateAnalyzer (can create angr states with the right env, args, and fs)
- QEMUTraceAnalyzer (does qemu tracing of the target)
- GDBServerAnalyzer (launches the target in a gdbserver)
- STraceAnalyzer (straces a target)
- CoreAnalyzer (launches the target and retrieves a core)
- InputFDAnalyzer (determines the FD number for user input (in some cases))

## Using archr

To use archr, one must first create a Target. First, build a docker image that launches your target. Here is an example dockerfile for a `docker-cat` image:

```
from ubuntu:latest
entrypoint ["/bin/cat"]
```

Then, load it as a target:

```
import archr
t = archr.targets.DockerImageTarget('docker-cat').build()
```

And _voilà!_, your target is ready to use. archr will automatically figure out how your binary runs inside your target, and then you can launch and interact with it:

```
t.start()
assert t.run_command(stdin=subprocess.DEVNULL).wait() == 0
t.stop()
```

archr makes heavy use of `with` contexts, which will help clean up resources. Embrace them. For example, you can:

```
with t.start():
    with t.run_context() as p:
        print(p, "is a subprocess.Popen object!")
        p.stdin.write("hello")
        assert p.stdout.read(5) == "hello"
```

There is even a context that will allow you to temporarily replace files on the target!
```
with t.start():
    with t.replacement_context("/etc/passwd", "hahaha"), t.run_context(args_suffix=["/etc/passwd"]) as p:
        assert p.stdout.read() == "hahaha"
    assert t.run_command(args_suffix=["/etc/passwd"]).stdout.read() != "hahaha"
```

And even one that will _temporarily replace the target binary's code with shellcode_:

```
with t.start():
    with t.shellcode_context(asm_code="mov rax, 60; mov rdi, 42; syscall") as p:
        assert p.wait() == 42
```

You can retrieve files from the target with `retrieve_contents`, `retrieve_paths`, and `retrieve_glob`, inject files with `inject_path`, `inject_contents`, and so on, get network endpoints using `ipv4_address`, `udp_ports`, and `tcp_ports`, and do some other interesting stuff! You can also make a `LocalTarget` to just run stuff on your host, and it is almost perfectly interchangeable with a `DockerImageTarget`:

```
import archr
t = archr.targets.LocalTarget(["/bin/cat"]).build()
```

To figure out how to run the binary, `LocalTarget` takes at least an argv list. You can also pass in an env. Keep in mind that since some of the above examples need write access to files, you will need to use writable files instead of `/etc/passwd` and `/bin/cat`.

## Caveats

Some caveats at the moment:

- archr does not handle string-specified (as opposed to array-specified) entrypoint directives in the docker file. This isn't hard; we just haven't gotten around to it (see issue #1).

%package -n python3-archr
Summary:	Target-centric program analysis.
Provides:	python-archr
BuildRequires:	python3-devel
BuildRequires:	python3-setuptools
BuildRequires:	python3-pip

%description -n python3-archr
# archr
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

Traditionally, binary analysis has been implicitly _program-centric_, meaning that the atomic unit of concern is the binary being analyzed.
This assumption is usually implicit: `angr.Project` is instantiated with the binary in question, `afl` launches the binary itself (generally hyper-modified to make it easier to fuzz), and so on. However, outside of the CGC, programs do not exist in a vacuum. Specific library versions, values in configuration files, environment variables, and a myriad of other factors combine with the program binary itself to make a unique, holistic _target_, and in many cases it is that target that needs to be analyzed, not just the program itself. This is especially true for analyses that need extreme accuracy, such as automatic exploit generation.

`archr` is an implementation of such a _target-centric_ analysis paradigm. It consists of two main concepts: `Targets`, which describe the specification of the target itself, how it is configured, how it will be launched, and how it is interacted with; and `Analyzers`, which specialize targets for specific analysis actions, such as tracing, symbolic execution, and so on. To accomplish their tasks, Analyzers might inject `Implants` (e.g., qemu-user, gdbserver, and so on) into the target.

We have the following Targets:

* DockerImageTarget, which takes a description of the target in the form of a docker image
* LocalTarget, which describes running the target on the local system

The following Analyzers exist:

- DataScoutAnalyzer (grabs the memory map, environment, and auxv of the process, exactly as it is at launch)
- AngrProjectAnalyzer (can create an angr project with the right libs at the right offsets)
- AngrStateAnalyzer (can create angr states with the right env, args, and fs)
- QEMUTraceAnalyzer (does qemu tracing of the target)
- GDBServerAnalyzer (launches the target in a gdbserver)
- STraceAnalyzer (straces a target)
- CoreAnalyzer (launches the target and retrieves a core)
- InputFDAnalyzer (determines the FD number for user input (in some cases))

## Using archr

To use archr, one must first create a Target. First, build a docker image that launches your target. Here is an example dockerfile for a `docker-cat` image:

```
from ubuntu:latest
entrypoint ["/bin/cat"]
```

Then, load it as a target:

```
import archr
t = archr.targets.DockerImageTarget('docker-cat').build()
```

And _voilà!_, your target is ready to use. archr will automatically figure out how your binary runs inside your target, and then you can launch and interact with it:

```
t.start()
assert t.run_command(stdin=subprocess.DEVNULL).wait() == 0
t.stop()
```

archr makes heavy use of `with` contexts, which will help clean up resources. Embrace them. For example, you can:

```
with t.start():
    with t.run_context() as p:
        print(p, "is a subprocess.Popen object!")
        p.stdin.write("hello")
        assert p.stdout.read(5) == "hello"
```

There is even a context that will allow you to temporarily replace files on the target!

```
with t.start():
    with t.replacement_context("/etc/passwd", "hahaha"), t.run_context(args_suffix=["/etc/passwd"]) as p:
        assert p.stdout.read() == "hahaha"
    assert t.run_command(args_suffix=["/etc/passwd"]).stdout.read() != "hahaha"
```

And even one that will _temporarily replace the target binary's code with shellcode_:

```
with t.start():
    with t.shellcode_context(asm_code="mov rax, 60; mov rdi, 42; syscall") as p:
        assert p.wait() == 42
```

You can retrieve files from the target with `retrieve_contents`, `retrieve_paths`, and `retrieve_glob`, inject files with `inject_path`, `inject_contents`, and so on, get network endpoints using `ipv4_address`, `udp_ports`, and `tcp_ports`, and do some other interesting stuff! You can also make a `LocalTarget` to just run stuff on your host, and it is almost perfectly interchangeable with a `DockerImageTarget`:

```
import archr
t = archr.targets.LocalTarget(["/bin/cat"]).build()
```

To figure out how to run the binary, `LocalTarget` takes at least an argv list. You can also pass in an env.
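Conceptually, a `LocalTarget`'s specification boils down to just that argv list plus an optional environment. The sketch below illustrates the idea in plain Python with `subprocess` (it is not archr's implementation, only what such a launch amounts to on the local system):

```python
import subprocess

# Plain-Python illustration (not archr itself): a local target is
# specified by an argv list and, optionally, an environment mapping.
argv = ["/bin/cat"]                 # how to run the binary
env = {"PATH": "/usr/bin:/bin"}     # optional environment for the target

# Launch the target and interact with it over stdin/stdout.
p = subprocess.Popen(argv, env=env,
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = p.communicate(b"hello")
assert out == b"hello" and p.returncode == 0
```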
Keep in mind that since some of the above examples need write access to files, you will need to use writable files instead of `/etc/passwd` and `/bin/cat`.

## Caveats

Some caveats at the moment:

- archr does not handle string-specified (as opposed to array-specified) entrypoint directives in the docker file. This isn't hard; we just haven't gotten around to it (see issue #1).

%package help
Summary:	Development documents and examples for archr
Provides:	python3-archr-doc

%description help
# archr
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

Traditionally, binary analysis has been implicitly _program-centric_, meaning that the atomic unit of concern is the binary being analyzed. This assumption is usually implicit: `angr.Project` is instantiated with the binary in question, `afl` launches the binary itself (generally hyper-modified to make it easier to fuzz), and so on. However, outside of the CGC, programs do not exist in a vacuum. Specific library versions, values in configuration files, environment variables, and a myriad of other factors combine with the program binary itself to make a unique, holistic _target_, and in many cases it is that target that needs to be analyzed, not just the program itself. This is especially true for analyses that need extreme accuracy, such as automatic exploit generation.

`archr` is an implementation of such a _target-centric_ analysis paradigm. It consists of two main concepts: `Targets`, which describe the specification of the target itself, how it is configured, how it will be launched, and how it is interacted with; and `Analyzers`, which specialize targets for specific analysis actions, such as tracing, symbolic execution, and so on. To accomplish their tasks, Analyzers might inject `Implants` (e.g., qemu-user, gdbserver, and so on) into the target.
We have the following Targets:

* DockerImageTarget, which takes a description of the target in the form of a docker image
* LocalTarget, which describes running the target on the local system

The following Analyzers exist:

- DataScoutAnalyzer (grabs the memory map, environment, and auxv of the process, exactly as it is at launch)
- AngrProjectAnalyzer (can create an angr project with the right libs at the right offsets)
- AngrStateAnalyzer (can create angr states with the right env, args, and fs)
- QEMUTraceAnalyzer (does qemu tracing of the target)
- GDBServerAnalyzer (launches the target in a gdbserver)
- STraceAnalyzer (straces a target)
- CoreAnalyzer (launches the target and retrieves a core)
- InputFDAnalyzer (determines the FD number for user input (in some cases))

## Using archr

To use archr, one must first create a Target. First, build a docker image that launches your target. Here is an example dockerfile for a `docker-cat` image:

```
from ubuntu:latest
entrypoint ["/bin/cat"]
```

Then, load it as a target:

```
import archr
t = archr.targets.DockerImageTarget('docker-cat').build()
```

And _voilà!_, your target is ready to use. archr will automatically figure out how your binary runs inside your target, and then you can launch and interact with it:

```
t.start()
assert t.run_command(stdin=subprocess.DEVNULL).wait() == 0
t.stop()
```

archr makes heavy use of `with` contexts, which will help clean up resources. Embrace them. For example, you can:

```
with t.start():
    with t.run_context() as p:
        print(p, "is a subprocess.Popen object!")
        p.stdin.write("hello")
        assert p.stdout.read(5) == "hello"
```

There is even a context that will allow you to temporarily replace files on the target!
```
with t.start():
    with t.replacement_context("/etc/passwd", "hahaha"), t.run_context(args_suffix=["/etc/passwd"]) as p:
        assert p.stdout.read() == "hahaha"
    assert t.run_command(args_suffix=["/etc/passwd"]).stdout.read() != "hahaha"
```

And even one that will _temporarily replace the target binary's code with shellcode_:

```
with t.start():
    with t.shellcode_context(asm_code="mov rax, 60; mov rdi, 42; syscall") as p:
        assert p.wait() == 42
```

You can retrieve files from the target with `retrieve_contents`, `retrieve_paths`, and `retrieve_glob`, inject files with `inject_path`, `inject_contents`, and so on, get network endpoints using `ipv4_address`, `udp_ports`, and `tcp_ports`, and do some other interesting stuff! You can also make a `LocalTarget` to just run stuff on your host, and it is almost perfectly interchangeable with a `DockerImageTarget`:

```
import archr
t = archr.targets.LocalTarget(["/bin/cat"]).build()
```

To figure out how to run the binary, `LocalTarget` takes at least an argv list. You can also pass in an env. Keep in mind that since some of the above examples need write access to files, you will need to use writable files instead of `/etc/passwd` and `/bin/cat`.

## Caveats

Some caveats at the moment:

- archr does not handle string-specified (as opposed to array-specified) entrypoint directives in the docker file. This isn't hard; we just haven't gotten around to it (see issue #1).
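The file-replacement idea behind `replacement_context` can be sketched in plain Python (this is only an illustration of the concept, not archr's implementation): a context manager saves a file's contents, substitutes new ones, and restores the original on exit, even if the body raises.

```python
import contextlib
import os
import tempfile

# Plain-Python sketch (not archr's implementation) of the idea behind
# replacement_context: temporarily substitute a file's contents and
# restore the original when the block exits.
@contextlib.contextmanager
def replacement_context(path, new_contents):
    with open(path, "rb") as f:
        original = f.read()
    try:
        with open(path, "wb") as f:
            f.write(new_contents)
        yield path
    finally:
        with open(path, "wb") as f:
            f.write(original)

# Demo on a writable temp file (per the caveat above, not /etc/passwd).
fd, demo = tempfile.mkstemp()
os.close(fd)
with open(demo, "wb") as f:
    f.write(b"original")
with replacement_context(demo, b"hahaha"):
    assert open(demo, "rb").read() == b"hahaha"
assert open(demo, "rb").read() == b"original"
os.remove(demo)
```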
%prep
%autosetup -n archr-9.2.49

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-archr -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Fri May 05 2023 Python_Bot - 9.2.49-1
- Package Spec generated