python-einx-0.3.0/.github/workflows/publish_pypi.yml

name: Publish package on PyPI

on:
  release:
    types: [published]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python 3.10
        uses: actions/setup-python@v3
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install build
      - name: Build package
        run: python -m build
      - name: Publish package
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          user: __token__
          password: ${{ secrets.PYPI_API_TOKEN }}
python-einx-0.3.0/.github/workflows/run_pytest.yml

name: Test with pytest

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

permissions:
  contents: read

jobs:
  test_py38:
    runs-on: ubuntu-latest
    steps:
      - name: Set up Python 3.8
        uses: actions/setup-python@v3
        with:
          python-version: "3.8"
      - name: Install optional dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest "jax[cpu]" flax torch tensorflow einops mlx dask tinygrad scipy
      - uses: actions/checkout@v3
      - name: Test with pytest
        run: |
          pip install .
          EINX_FILTER_TRACEBACK=0 pytest
  test_py310:
    runs-on: ubuntu-latest
    steps:
      - name: Set up Python 3.10
        uses: actions/setup-python@v3
        with:
          python-version: "3.10"
      - name: Install optional dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest "jax[cpu]" flax dm-haiku torch tensorflow einops equinox mlx dask tinygrad scipy
          pip install --upgrade keras
      - uses: actions/checkout@v3
      - name: Test with pytest
        run: |
          pip install .
          EINX_FILTER_TRACEBACK=0 pytest
          pip install --upgrade "torch==2.2.0"
          EINX_FILTER_TRACEBACK=0 pytest
          pip install --upgrade "torch==2.1.0"
          EINX_FILTER_TRACEBACK=0 pytest
          pip install --upgrade "torch==2.0.0"
          EINX_FILTER_TRACEBACK=0 pytest

python-einx-0.3.0/.gitignore

*.egg-info
docs/build
examples/cifar10
dist/*
act
__pycache__
/*.py

python-einx-0.3.0/.readthedocs.yml

# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
  os: ubuntu-20.04
  tools:
    python: "3.9"

sphinx:
  configuration: docs/source/conf.py

python:
  install:
    - requirements: docs/requirements.txt

python-einx-0.3.0/CHANGELOG.md

# Changelog
## [0.3.0]
### Added
- Add partial support for [tinygrad](https://github.com/tinygrad/tinygrad).
- Supported:
- `einx.rearrange`
- `einx.{elementwise|add|multiply|where|...}`
- `einx.{reduce|sum|mean|...}`
- `einx.{vmap_with_axis|flip|softmax|...}`
- `einx.dot`
- Not supported:
- `einx.vmap` (no `vmap` in tinygrad)
- `einx.{index|get_at|set_at|...}` (due to relying on `einx.vmap`)
### Changed
- Use `tf.gather_nd` instead of `x[y]` to implement `einx.get_at` for Tensorflow.
### Fixed
- Allow empty tuples and lists as constraints for ellipsis parameters.
- Fix shorthand notation in `einx.dot`.
## [0.2.2]
### Added
- Add [`einx.experimental.shard`](https://einx.readthedocs.io/en/latest/api.html#einx.experimental.shard).
### Fixed
- Fix bug when calling einx from multiple threads. (Run unit tests also in multi-threaded context.)
## [0.2.1]
### Changed
- **Remove einx dependency in compiled code:** The code for a traced function now directly imports and uses the namespace
of the backend (e.g. `import torch`). For example:
```python
>>> print(einx.dot("b q (h c), b k (h c) -> b q k h", x, y, h=16, graph=True))
import torch
def op0(i0, i1):
x0 = torch.reshape(i0, (16, 768, 16, 64))
x1 = torch.reshape(i1, (16, 768, 16, 64))
x2 = torch.einsum("abcd,aecd->abec", x0, x1)
return x2
```
In most cases, compiled functions now contain no reference to other einx code.
- **Improve handling of Python scalars:** (see https://github.com/fferflo/einx/issues/7) einx now only converts `int`, `float` and `bool` to tensor
objects (e.g. via `torch.asarray`) if the backend function that is called does not support Python scalars (previously all inputs were converted
to tensor objects). When using PyTorch, the `device` argument will be used to place the constructed tensor on the correct
device.
For example, `torch.add` supports Python scalars
```python
>>> print(einx.add("a,", x, 1, graph=True))
import torch
def op0(i0, i1):
x0 = torch.add(i0, i1)
return x0
```
while `torch.maximum` does not:
```python
>>> print(einx.maximum("a,", x, 1, graph=True))
import torch
def op0(i0, i1):
x0 = torch.asarray(i1, device=i0.device)
x1 = torch.maximum(i0, x0)
return x1
```
- Run unit tests for PyTorch and Jax also on the GPU (if it is available).
- Run unit tests also with `jax.jit` and `torch.compile`.
### Fixed
- Add workarounds for issues with `torch.compile`: https://github.com/pytorch/pytorch/issues/94674 and https://github.com/pytorch/pytorch/issues/124269
## [0.2.0]
### Added
- Add partial support for Apple's [mlx](https://github.com/ml-explore/mlx).
- Supported:
- `einx.rearrange`
- `einx.{elementwise|add|multiply|where|...}`
- `einx.{reduce|sum|mean|...}`
- `einx.{vmap_with_axis|flip|softmax|...}`
- Not supported yet:
- `einx.dot` (`mx.einsum` is not implemented yet)
- `einx.vmap` (`mx.vmap` does not fully support all primitives yet)
- `einx.{index|get_at|set_at|...}` (due to relying on `einx.vmap`)
- Add partial support for [dask.array](https://docs.dask.org/en/stable/array.html).
- Supported:
- `einx.rearrange`
- `einx.{elementwise|add|multiply|where|...}`
- `einx.{reduce|sum|mean|...}`
- `einx.{vmap_with_axis|flip|softmax|...}`
- `einx.dot`
- Not supported:
- `einx.vmap` (`vmap` not implemented in dask)
- `einx.{index|get_at|set_at|...}` (due to relying on `einx.vmap`)
- Add environment variable `EINX_WARN_ON_RETRACE` to warn when excessive retracing is detected.
### Changed
- Allow `->` and `,` to be composed with other operators. (This deprecates the existing `[|]` notation which should instead be implemented with
composable `->`. The feature is still maintained for backwards compatibility). For example:
- `einx.dot("b [c1->c2]", ...)` expands to `einx.dot("b [c1] -> b [c2]", ...)`
- `einx.get_at("b p [i,->]", ...)` expands to `einx.get_at("b p [i], b p -> b p", ...)`
- Allow `einx.{set_at|add_at|...}` to be called with zero-sized updates or coordinates (in which case the input tensor is returned as-is).
- Remove `backend.dot` which was not used anywhere but in the unit tests.
- Improve error reporting:
- Drop internal stack frames when raising exceptions.
- Better error when passing invalid shape constraints to einx functions.
- Reduce overhead of einx when using the PyTorch backend.
### Fixed
- Fix compatibility of `einx.nn.torch.Norm` with PyTorch 2.2.
- Fix parameters in `einn.param` being ignored.
- Fix bug when using concatenations in `einx.rearrange`. See: https://github.com/fferflo/einx/issues/6
- Fix broadcasting new axes in `einx.vmap_with_axis`.
- Disable `torch.compile` during graph construction using [torch.compiler.disable](https://pytorch.org/docs/stable/generated/torch.compiler.disable.html).
## [0.1.3]
### Added
- Add option to install einx via `pip install einx[torch]` or `pip install einx[keras]` to enforce version requirements on PyTorch or Keras.
### Changed
- Fail gracefully and report error when run with incompatible version of PyTorch and Keras.
### Fixed
- Fix compatibility with 2.0 <= PyTorch < 2.1.
## [0.1.2]
### Added
- Add type annotations to public API.
- Allow passing multiple coordinate tensors in `einx.{get_at|set_at|...}`.
- Allow implicit output shape in `einx.{set_at|add_at|...}`.
- Allow passing backend with string argument to `einx.nn.norm`.
- Make backends accessible as `einx.backend.{NAME}` once they are loaded.
### Changed
- Refactor tracing:
- Trace vmapped functions (previously kept a pointer to an untraced function).
- Add shape assertion when calling unsafe functions.
- Add comments for better inspection.
- Remove `pass_backend` argument from `einx.vmap`.
- Cache different functions for different backends.
- Don't call `backend.to_tensor` if input already has correct type.
For example, tracing `einx.get_at` now gives the following jit-compiled code:
```python
>>> print(einx.get_at("b [h w] c, b p [2] -> b p c", x, y, graph=True))
# backend: einx.backend.numpy
def op1(i0, i1):
x1 = i1[:, 0]
x2 = i1[:, 1]
x0 = backend.get_at(i0, (x1, x2))
return (x0,)
def op0(i0, i1, op1=op1):
op2 = backend.vmap(op1, in_axes=(0, 0), out_axes=(0,))
op3 = backend.vmap(op2, in_axes=(3, None), out_axes=(2,))
x0 = op3(i0, i1)
return x0[0]
```
### Fixed
- Fix bug when using "1" as coordinate axis in einx.index.
- Add workaround for scalar indexing operations with torch.vmap (see https://github.com/pytorch/functorch/issues/747).
- Fix support for list/tuple arguments as tensors with non-trivial shape.
- Change einx.reduce to accept only single tensors as arguments (API allowed multiple arguments, but was not implemented).
- Don't trace and jit functions if EINX_CACHE_SIZE=0.
- Fix bug where some static code analysis tools fail to recognize function specializations.

python-einx-0.3.0/LICENSE

MIT License
Copyright (c) 2023- Florian Fervers
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
python-einx-0.3.0/README.md

# *einx* - Universal Tensor Operations in Einstein-Inspired Notation
[](https://github.com/fferflo/einx/actions/workflows/run_pytest.yml)
[](https://einx.readthedocs.io)
[](https://badge.fury.io/py/einx)
[](https://www.python.org/downloads/release/python-380/)
einx is a Python library that provides a universal interface to formulate tensor operations in frameworks such as Numpy, PyTorch, Jax and Tensorflow. The design is based on the following principles:
1. **Provide a set of elementary tensor operations** following Numpy-like naming: `einx.{sum|max|where|add|dot|flip|get_at|...}`
2. **Use einx notation to express vectorization of the elementary operations.** einx notation is inspired by [einops](https://github.com/arogozhnikov/einops), but introduces several novel concepts such as `[]`-bracket notation and full composability that allow using it as a universal language for tensor operations.
einx can be integrated and mixed with existing code seamlessly. All operations are [just-in-time compiled](https://einx.readthedocs.io/en/latest/more/jit.html) into regular Python functions using Python's [exec()](https://docs.python.org/3/library/functions.html#exec) and invoke operations from the respective framework.
**Getting started:**
* [Tutorial](https://einx.readthedocs.io/en/latest/gettingstarted/tutorial_overview.html)
* [Example: GPT-2 with einx](https://einx.readthedocs.io/en/latest/gettingstarted/gpt2.html)
* [How is einx different from einops?](https://einx.readthedocs.io/en/latest/faq/einops.html)
* [How is einx notation universal?](https://einx.readthedocs.io/en/latest/faq/universal.html)
* [API reference](https://einx.readthedocs.io/en/latest/api.html)
## Installation
```
pip install einx
```
See [Installation](https://einx.readthedocs.io/en/latest/gettingstarted/installation.html) for more information.
## What does einx look like?
#### Tensor manipulation
```python
import einx
x = {np.asarray|torch.as_tensor|jnp.asarray|...}(...) # Create some tensor
einx.sum("a [b]", x) # Sum-reduction along second axis
einx.flip("... (g [c])", x, c=2) # Flip pairs of values along the last axis
einx.mean("b [s...] c", x) # Spatial mean-pooling
einx.sum("b (s [s2])... c", x, s2=2) # Sum-pooling with kernel_size=stride=2
einx.add("a, b -> a b", x, y) # Outer sum
einx.get_at("b [h w] c, b i [2] -> b i c", x, y) # Gather values at coordinates
einx.rearrange("b (q + k) -> b q, b k", x, q=2) # Split
einx.rearrange("b c, 1 -> b (c + 1)", x, [42]) # Append number to each channel
# Apply custom operations:
einx.vmap("b [s...] c -> b c", x, op=np.mean) # Spatial mean-pooling
einx.vmap("a [b], [b] c -> a c", x, y, op=np.dot) # Matmul
```
All einx functions simply forward computation to the respective backend, e.g. by internally calling `np.reshape`, `np.transpose`, `np.sum` with the appropriate arguments.
#### Common neural network operations
```python
# Layer normalization
mean = einx.mean("b... [c]", x, keepdims=True)
var = einx.var("b... [c]", x, keepdims=True)
x = (x - mean) * torch.rsqrt(var + epsilon)
# Prepend class token
einx.rearrange("b s... c, c -> b (1 + (s...)) c", x, cls_token)
# Multi-head attention
attn = einx.dot("b q (h c), b k (h c) -> b q k h", q, k, h=8)
attn = einx.softmax("b q [k] h", attn)
x = einx.dot("b q k h, b k (h c) -> b q (h c)", attn, v)
# Matmul in linear layers
einx.dot("b... [c1->c2]", x, w) # - Regular
einx.dot("b... (g [c1->c2])", x, w) # - Grouped: Same weights per group
einx.dot("b... ([g c1->g c2])", x, w) # - Grouped: Different weights per group
einx.dot("b [s...->s2] c", x, w) # - Spatial mixing as in MLP-mixer
```
See [Common neural network ops](https://einx.readthedocs.io/en/latest/gettingstarted/commonnnops.html) for more examples.
#### Optional: Deep learning modules
```python
import einx.nn.{torch|flax|haiku|equinox|keras} as einn
batchnorm = einn.Norm("[b...] c", decay_rate=0.9)
layernorm = einn.Norm("b... [c]") # as used in transformers
instancenorm = einn.Norm("b [s...] c")
groupnorm = einn.Norm("b [s...] (g [c])", g=8)
rmsnorm = einn.Norm("b... [c]", mean=False, bias=False)
channel_mix = einn.Linear("b... [c1->c2]", c2=64)
spatial_mix1 = einn.Linear("b [s...->s2] c", s2=64)
spatial_mix2 = einn.Linear("b [s2->s...] c", s=(64, 64))
patch_embed = einn.Linear("b (s [s2->])... [c1->c2]", s2=4, c2=64)
dropout = einn.Dropout("[...]", drop_rate=0.2)
spatial_dropout = einn.Dropout("[b] ... [c]", drop_rate=0.2)
droppath = einn.Dropout("[b] ...", drop_rate=0.2)
```
See `examples/train_{torch|flax|haiku|equinox|keras}.py` for example trainings on CIFAR10, [GPT-2](https://einx.readthedocs.io/en/latest/gettingstarted/gpt2.html) and [Mamba](https://github.com/fferflo/weightbridge/blob/master/examples/mamba2flax.py) for working example implementations of language models using einx, and [Tutorial: Neural networks](https://einx.readthedocs.io/en/latest/gettingstarted/tutorial_neuralnetworks.html) for more details.
#### Just-in-time compilation
einx traces the required backend operations for a given call into graph representation and just-in-time compiles them into a regular Python function using Python's [`exec()`](https://docs.python.org/3/library/functions.html#exec). This reduces overhead to a single cache lookup and allows inspecting the generated function. For example:
```python
>>> x = np.zeros((3, 10, 10))
>>> graph = einx.sum("... (g [c])", x, g=2, graph=True)
>>> print(graph)
import numpy as np
def op0(i0):
x0 = np.reshape(i0, (3, 10, 2, 5))
x1 = np.sum(x0, axis=3)
return x1
```
See [Just-in-time compilation](https://einx.readthedocs.io/en/latest/more/jit.html) for more details.

python-einx-0.3.0/docs/Makefile

# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
python-einx-0.3.0/docs/make.bat

@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.https://www.sphinx-doc.org/
exit /b 1
)
if "%1" == "" goto help
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
:end
popd
python-einx-0.3.0/docs/requirements.txt

sphinx>=6.0.0
sphinx-autodoc-typehints
sphinx-book-theme>=1.0.1
.
dm-haiku
flax
torch

python-einx-0.3.0/docs/source/api.rst

########
einx API
########
Main
----
.. autofunction:: einx.rearrange
.. autofunction:: einx.vmap_with_axis
.. autofunction:: einx.vmap
.. autofunction:: einx.reduce
.. autofunction:: einx.elementwise
.. autofunction:: einx.index
Reduction operations
--------------------
.. autofunction:: einx.sum
.. autofunction:: einx.mean
.. autofunction:: einx.var
.. autofunction:: einx.std
.. autofunction:: einx.prod
.. autofunction:: einx.count_nonzero
.. autofunction:: einx.any
.. autofunction:: einx.all
.. autofunction:: einx.max
.. autofunction:: einx.min
.. autofunction:: einx.logsumexp
Element-by-element operations
-----------------------------
.. autofunction:: einx.add
.. autofunction:: einx.subtract
.. autofunction:: einx.multiply
.. autofunction:: einx.true_divide
.. autofunction:: einx.floor_divide
.. autofunction:: einx.divide
.. autofunction:: einx.logical_and
.. autofunction:: einx.logical_or
.. autofunction:: einx.where
.. autofunction:: einx.less
.. autofunction:: einx.less_equal
.. autofunction:: einx.greater
.. autofunction:: einx.greater_equal
.. autofunction:: einx.equal
.. autofunction:: einx.not_equal
.. autofunction:: einx.maximum
.. autofunction:: einx.minimum
Indexing operations
-------------------
.. autofunction:: einx.get_at
.. autofunction:: einx.set_at
.. autofunction:: einx.add_at
.. autofunction:: einx.subtract_at
Miscellaneous operations
------------------------
.. autofunction:: einx.flip
.. autofunction:: einx.roll
.. autofunction:: einx.softmax
.. autofunction:: einx.log_softmax
.. autofunction:: einx.arange
General dot-product
-------------------
.. autofunction:: einx.dot
Deep Learning Modules
=====================
Haiku
-----
.. autoclass:: einx.nn.haiku.Linear
.. autoclass:: einx.nn.haiku.Norm
.. autoclass:: einx.nn.haiku.Dropout
.. autofunction:: einx.nn.haiku.param
Flax
----
.. autofunction:: einx.nn.flax.Linear
.. autofunction:: einx.nn.flax.Norm
.. autofunction:: einx.nn.flax.Dropout
.. autofunction:: einx.nn.flax.param
Torch
-----
.. autoclass:: einx.nn.torch.Linear
.. autoclass:: einx.nn.torch.Norm
.. autoclass:: einx.nn.torch.Dropout
.. autofunction:: einx.nn.torch.param
Equinox
-------
.. autoclass:: einx.nn.equinox.Linear
.. autoclass:: einx.nn.equinox.Norm
.. autoclass:: einx.nn.equinox.Dropout
.. autofunction:: einx.nn.equinox.param
Keras
-----
.. autoclass:: einx.nn.keras.Linear
.. autoclass:: einx.nn.keras.Norm
.. autoclass:: einx.nn.keras.Dropout
.. autofunction:: einx.nn.keras.param
Experimental
============
.. autofunction:: einx.experimental.shard

python-einx-0.3.0/docs/source/conf.py

# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
project = "einx"
copyright = "2024, Florian Fervers"
author = 'Florian Fervers'
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.autosummary",
"sphinx.ext.intersphinx",
"sphinx.ext.mathjax",
"sphinx.ext.napoleon",
"sphinx.ext.viewcode",
"sphinx_autodoc_typehints",
]
templates_path = []
exclude_patterns = []
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = "sphinx_book_theme"
html_theme_options = {
"show_toc_level": 2,
"repository_url": "https://github.com/fferflo/einx",
"use_repository_button": True,
}
html_static_path = []
python-einx-0.3.0/docs/source/faq/backend.rst

How does einx support different tensor frameworks?
##################################################
einx provides interfaces for tensor frameworks in the ``einx.backend.*`` namespace. einx functions accept a ``backend`` argument
that defines which backend to use for the computation. For ``backend=None`` (the default case), the backend is implicitly determined
from the type of the input tensors.
.. code:: python
x = np.ones((2, 3))
einx.sum("a [b]", x, backend=einx.backend.get("numpy")) # Uses numpy backend
einx.sum("a [b]", x) # Implicitly uses numpy backend
Numpy tensors can be mixed with other frameworks in the same operation, in which case the latter backend is used for computations. Frameworks other than
Numpy cannot be mixed in the same operation.
.. code:: python
x = np.zeros((10, 20))
y = np.zeros((20, 30))
einx.dot("a [c1->c2]", x, torch.from_numpy(y)) # Uses torch
einx.dot("a [c1->c2]", x, jnp.asarray(y)) # Uses jax
einx.dot("a [c1->c2]", torch.from_numpy(x), jnp.asarray(y)) # Raises exception
Unknown tensor objects and Python sequences are converted to tensors using calls from the respective backend if possible (e.g. ``np.asarray``, ``torch.asarray``).
.. code:: python
x = np.zeros((10, 20))
einx.add("a b, 1", x, [42.0]) python-einx-0.3.0/docs/source/faq/einops.rst 0000664 0000000 0000000 00000010325 15052160342 0021021 0 ustar 00root root 0000000 0000000 How is einx different from einops?
##################################
einx uses Einstein-inspired notation that is based on and compatible with the notation used in `einops `_,
but introduces several novel concepts that allow using it as a universal language for tensor operations:
* Introduction of ``[]``-notation to express vectorization of elementary operations (see :ref:`Bracket notation `).
* Ellipses repeat the preceding expression rather than an anonymous axis. This allows expressing multi-dimensional operations more concisely
(e.g. ``(a b)...`` or ``b (s [ds])... c``)
* Full composability of expressions: Axis lists, compositions, ellipses, brackets and concatenations can be nested arbitrarily (e.g. ``(a b)...`` or
``b (1 + (s...)) c``).
* Introduction of concatenations as first-class expressions.
The library provides the following additional features based on the einx notation:
* Support for many more tensor operations, for example:
.. code::
einx.flip("... (g [c])", x, c=2) # Flip pairs of values
einx.add("a, b -> a b", x, y) # Outer sum
einx.get_at("b [h w] c, b i [2] -> b i c", x, indices) # Gather values
einx.softmax("b q [k] h", attn) # Part of attention operation
* Simpler notation for existing tensor operations:
.. code::
einx.sum("a [b]", x)
# same op as
einops.reduce(x, "a b -> a", reduction="sum")
einx.mean("b (s [ds])... c", x, ds=2)
# einops does not support named ellipses. Alternative for 2D case:
einops.reduce(x, "b (h h2) (w w2) c -> b h w c", reduction="mean", h2=2, w2=2)
* Full support for rearranging expressions in all operations (see :doc:`How does einx handle input and output tensors? `).
.. code::
einx.dot("b q (h c), b k (h c) -> b q k h", q, k, h=16)
# Axis composition not supported e.g. in einops.einsum.
* ``einx.vmap`` and ``einx.vmap_with_axis`` allow applying arbitrary operations using einx notation.
* Several generalized deep learning modules in the ``einx.nn.*`` namespace (see :doc:`Tutorial: Neural networks `).
* Support for inspecting the backend calls made by einx in index-based notation (see :doc:`Just-in-time compilation `).
A non-exhaustive comparison of operations expressed in einx-notation and einops-notation:
.. list-table::
:widths: 50 60
:header-rows: 0
* - **einx**
- **einops**
* - .. code-block:: python
einx.mean("b [...] c", x)
- .. code-block:: python
einops.reduce(x, "b ... c -> b c", reduction="mean")
* - .. code-block:: python
einx.mean("b [...] c", x, keepdims=True)
- .. code-block:: python
# For 2D case:
einops.reduce(x, "b h w c -> b 1 1 c", reduction="mean")
* - .. code-block:: python
einx.mean("b (s [s2])... c", x, s2=2)
- .. code-block:: python
# For 2D case:
einops.reduce(x, "b (h h2) (w w2) c -> b h w c", reduction="mean", h2=2, w2=2)
* - .. code-block:: python
einx.dot("... [c1->c2]", x, w)
- .. code-block:: python
einops.einsum(x, w, "... c1, c1 c2 -> ... c2")
* - .. code-block:: python
einx.rearrange("h a, h -> h (a + 1)", x, y)
- .. code-block:: python
einops.pack([x, y], "h *")
* - .. code-block:: python
einx.rearrange("h (a + 1) -> h a, h 1 ", x)
- .. code-block:: python
einops.unpack(x, [[3], [1]], "h *")
* - .. code-block:: python
einx.rearrange("a c, 1 -> a (c + 1)", x, [42])
- Rearranging and broadcasting not supported in ``einops.pack``
* - .. code-block:: python
einx.dot("... (g [c1->c2])", x, w)
- Shape rearrangement not supported in ``einops.einsum``
* - .. code-block:: python
einx.add("... [c]", x, b)
- Elementwise operations not supported
* - .. code-block:: python
einx.rearrange("(a b) c -> c (a b)", x)
- Fails, since values for ``a`` and ``b`` cannot be determined
* - .. code-block:: python
einx.vmap("b [...] c -> b c", x, op=my_func)
- vmap not supported
python-einx-0.3.0/docs/source/faq/flatten.rst

How does einx handle input and output tensors?
##############################################
einx functions accept an operation string that specifies einx expressions for the input and output tensors. The expressions potentially
contain nested compositions and concatenations that prevent the backend functions from directly accessing the required axes. To resolve this, einx
first flattens the input tensors in each operation such that they contain only a flat list of axes. After the backend operation is applied, the
resulting tensors are unflattened to match the requested output expressions.
Compositions are flattened by applying a `reshape` operation:
.. code::
einx.rearrange("(a b) -> a b", x, a=10, b=20)
# same as
np.reshape(x, (10, 20))
Concatenations are flattened by splitting the input tensor into multiple tensors along the concatenated axis:
.. code::
einx.rearrange("(a + b) -> a, b", x, a=10, b=20)
# same as
np.split(x, [10], axis=0)
After the operation is applied to the flattened tensors, the results are reshaped and concatenated and missing axes are inserted and broadcasted
to match the requested output expressions.
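As a quick check (a minimal sketch using NumPy; the shapes are arbitrary), composing axes in the output is realized by a trailing ``reshape`` after the backend operation:

.. code::

    import numpy as np
    import einx

    x = np.zeros((10, 20))

    y = einx.rearrange("a b -> (a b)", x)  # output composition -> reshape at the end
    assert y.shape == (200,)               # same result as np.reshape(x, (200,))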
When multiple input and output tensors are specified, einx tries to find a valid assignment between inputs and outputs for the given axis names. This
can sometimes lead to ambiguous assignments:
.. code::
# Broadcast and stack x and y along the last axis. x or y first?
einx.rearrange("a, b -> a b (1 + 1)", x, y)
To find an assignment, einx iterates over the outputs in the order they appear in the operation string, and for each output tries to find the first input
expression that allows for a successful assignment. In most cases, this leads to input and output expressions being assigned in the same order:
.. code::
einx.rearrange("a, b -> a b (1 + 1)", x, y)
# same as
np.stack([x, y], axis=-1)
The function :func:`einx.rearrange` can be used to perform flattening and unflattening of the input tensors as described in the operation string. Other functions
such as :func:`einx.vmap` and :func:`einx.dot` perform the same flattening and unflattening, in addition to applying an operation to the flattened tensors.
python-einx-0.3.0/docs/source/faq/solver.rst 0000664 0000000 0000000 00000015640 15052160342 0021043 0 ustar 00root root 0000000 0000000 How does einx parse expressions?
################################
Overview
--------
einx functions accept an operation string that specifies the shapes of input and output tensors and the requested operation in einx notation. For example:
.. code::
einx.mean("b (s [r])... c -> b s... c", x, r=4) # Mean-pooling with stride 4
To identify the backend operations that are required to execute this statement, einx first parses the operation string and determines an *expression tree*
for each input and output tensor. The tree represents a full description of the tensor's shape and axes marked with brackets. The nodes represent different types of
subexpressions such as axis lists, compositions, ellipses and concatenations. The leaves of the tree are the named and unnamed axes of the tensor. The expression trees
are used to determine the required rearranging steps and axes along which backend operations are applied.
einx uses a multi-step process to convert expression strings into expression trees:
* **Stage 0**: Split the operation string into separate expression strings for each tensor.
* **Stage 1**: Parse the expression string for each tensor and return a (stage-1) tree of nodes representing the nested subexpressions.
* **Stage 2**: Expand all ellipses by repeating the respective subexpression, resulting in a stage-2 tree.
* **Stage 3**: Determine a value for each axis (i.e. the axis length) using the provided constraints, resulting in a stage-3 tree, i.e. the final expression tree.
For a given operation string and signature of input arguments, the required backend operations are traced into graph representation and just-in-time compiled using Python's
`exec() `_. Every subsequent call with the same
signature will reuse the cached function and therefore incur no additional overhead other than for cache lookup (see
:doc:`Just-in-time compilation `).
Stage 0: Splitting the operation string
---------------------------------------
The operation string is first split into separate expression strings for each tensor. In the above example, this results in ``b (s [r])... c`` and ``b s... c``
for the input and output tensor, respectively. Inputs and outputs are separated by ``->``, and multiple tensors on each side are separated by ``,``. The order of the tensors
matches the order of the parameters and return values of the einx function.
Most functions also accept shorthand operation strings to avoid redundancy and facilitate more concise expressions. For example, in ``einx.mean`` the output expression can
be implicitly determined from the input expression by removing marked axes, and can therefore be omitted (see :func:`einx.reduce`):
.. code::
einx.mean("b (s [r])... c -> b s... c", x, r=4)
# same as
einx.mean("b (s [r])... c", x, r=4)
Another example of shorthand notation in :func:`einx.dot`:
.. code::
einx.dot("a b, b c -> a c", x, y)
# same as
einx.dot("a [b] -> a [c]", x, y)
# same as
einx.dot("a [b->c]", x, y)
See :doc:`Tutorial: Operations ` and the documentation of the respective functions for allowed shorthand notation.
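The shorthand forms are exactly equivalent to the expanded operation string, which can be verified numerically (a minimal sketch; shapes chosen arbitrarily):

.. code::

    import numpy as np
    import einx

    x = np.random.rand(4, 6)
    y = np.random.rand(6, 8)

    a = einx.dot("a b, b c -> a c", x, y)  # full operation string
    b = einx.dot("a [b->c]", x, y)         # shorthand form
    assert np.allclose(a, b)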
Stage 1: Parsing the expression string
--------------------------------------
The expression string for each tensor is parsed into a (stage-1) expression tree using a simple lexer and parser. The tree is a nested structure of nodes that represent the different types of
subexpressions:
.. figure:: /images/stage1-tree.png
:width: 300
:align: center
Stage-1 tree for ``b (s [r])... c``.
This includes semantic checks, e.g. to ensure that axis names do not appear more than once per expression.
Stage 2: Expanding ellipses
---------------------------
To expand the ellipses in a stage-1 expression, einx first determines the *depth* of every axis, i.e. the number of ellipses that the axis is nested in. In the above expression,
``b`` and ``c`` have depth 0, while ``s`` and ``r`` have depth 1. einx ensures that the depth of axes is consistent over different expressions: E.g. an operation
``b s... c -> b s c`` would raise an exception.
In a second step, the *expansion* of all ellipses, i.e. the number of repetitions, is determined using the constraints provided by the input tensors. For example, given a tensor with
rank 4, the ellipsis in ``b (s [r])... c`` has an expansion of 2. einx ensures that the expansion of all axes is consistent over different expressions: E.g. an
operation ``s..., s... -> s...`` would raise an exception if the two input tensors have different rank.
The expression ``b (s [r])... c`` is expanded to ``b (s.0 [r.0]) (s.1 [r.1]) c`` for a 4D input tensor:
.. figure:: /images/stage2-tree.png
:height: 240
:align: center
Stage-2 tree for ``b (s [r])... c`` on input tensor with rank 4.
Parameters that are passed as additional constraints to the einx function, such as ``r=4`` in
.. code::
einx.mean("b (s [r])... c -> b s... c", x, r=4)
are included when solving for the depth and expansion of all expressions. Unlike the root
expressions describing the input tensors, these parameters can be given both in expanded (``r=(4, 4)``) and unexpanded form (``r=4``). In the first case, the values of ``r.0`` and ``r.1``
are defined explicitly and an additional constraint for the expansion of ``r`` is included. In the second case, the same value is used for the repetitions ``r.0`` and ``r.1``. This
extends to nested ellipses with depth > 1 analogously.
Stage 3: Determining axis values
--------------------------------
In the last step, the values of all axes (i.e. their lengths) are determined using the constraints provided by the input tensors and additional parameters. For example, the above
expression with an input tensor of shape ``(2, 4, 8, 3)`` and additional constraint ``r=4`` results in the following final expression tree:
.. figure:: /images/stage3-tree.png
:height: 240
:align: center
Stage-3 tree for ``b (s [r])... c`` for tensor with shape ``(2, 4, 8, 3)`` and constraint ``r=4``.
The value of axis lists and axis concatenations is determined as the product and sum of their children's values, respectively. An unnamed axis (i.e. a number in the expression string such as
``1``, ``16``) is treated as an axis with a new unique name and an additional constraint specifying its value.
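Putting the three stages together for the running example (a minimal sketch; the input shape matches the figure above):

.. code::

    import numpy as np
    import einx

    x = np.zeros((2, 4, 8, 3))  # rank 4 => the ellipsis expands to two spatial axes

    y = einx.mean("b (s [r])... c -> b s... c", x, r=4)
    print(y.shape)  # (2, 1, 2, 3): s.0 = 4 // 4 = 1, s.1 = 8 // 4 = 2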
Solver
------
einx uses a `SymPy `_-based solver to determine the depth and expansion of all expressions in stage 2, and the values of all axes in stage 3 by providing
equations representing the respective constraints.
Instead of directly applying the solver to these equations, einx first determines *equivalence classes* of axes that are known to have
the same value (from equations like ``a = b`` and ``a = 1``) and for each equivalence class passes a single variable to `SymPy `_.
This speeds up the solver and allows raising more expressive exceptions when conflicting constraints are found.
python-einx-0.3.0/docs/source/faq/universal.rst 0000664 0000000 0000000 00000012716 15052160342 0021542 0 ustar 00root root 0000000 0000000 How is einx notation universal?
###############################
To address this question, let's first look at how tensor operations are commonly expressed in existing tensor frameworks.
Classical notation
------------------
Tensor operations can be dissected into two distinct components:
1. An **elementary operation** that is performed.
* Example: ``np.sum`` computes a sum-reduction.
2. A division of the input tensor into sub-tensors. The elementary operation is applied to each sub-tensor independently. We refer to this as **vectorization**.
* Example: Sub-tensors in ``np.sum`` span the dimensions specified by the ``axis`` parameter. The sum-reduction is vectorized over all other dimensions.
In common tensor frameworks like Numpy, PyTorch, Tensorflow or Jax, different elementary operations are implemented with different vectorization rules.
For example, to express vectorization
* ``np.sum`` uses the ``axis`` parameter,
* ``np.add`` follows `implicit broadcasting rules `_ (e.g. in combination with ``np.newaxis``), and
* ``np.matmul`` provides `an implicit and custom set of rules `_.
Furthermore, an elementary operation is sometimes implemented in multiple APIs in order to offer vectorization rules for different use cases.
For example, the retrieve-at-index operation can be implemented in PyTorch using ``tensor[coords]``, ``torch.gather``, ``torch.index_select``, ``torch.take``,
``torch.take_along_dim``, which conceptually apply the same low-level operation, but follow different vectorization rules (see below).
Still, these interfaces sometimes do not cover all desirable use cases.
einx notation
-------------
einx provides an interface to tensor operations where vectorization is expressed entirely using einx notation, and each elementary operation
is represented by exactly one API. The einx notation is:
* **Consistent**: The same type of notation is used for all elementary operations. Each elementary operation is represented by exactly one API.
* **Complete**: Any operation that can be expressed with existing vectorization tools such as
`jax.vmap `_ can also be expressed in einx notation.
The following tables show examples of classical API calls that can be expressed using universal einx operations.
.. list-table:: Example: ``einx.get_at``
:widths: 42 58
:header-rows: 1
* - Classical API
- einx API
* - | ``torch.gather(x, 0, y)``
| ``torch.take_along_dim(x, y, dim=0)``
- ``einx.get_at("[_] b c, i b c -> i b c", x, y)``
* - | ``torch.gather(x, 1, y)``
| ``torch.take_along_dim(x, y, dim=1)``
- ``einx.get_at("a [_] c, a i c -> a i c", x, y)``
* - | ``torch.index_select(x, 0, y)``
| ``tf.gather(x, y, axis=0)``
- ``einx.get_at("[_] b c, i -> i b c", x, y)``
* - | ``torch.index_select(x, 1, y)``
| ``tf.gather(x, y, axis=1)``
- ``einx.get_at("a [_] c, i -> a i c", x, y)``
* - ``tf.gather(x, y, axis=1, batch_dims=1)``
- ``einx.get_at("a [_] c, a i -> a i c", x, y)``
* - ``torch.take(x, y)``
- ``einx.get_at("[_], ... -> ...", x, y)``
* - ``tf.gather_nd(x, y)``
- ``einx.get_at("[...], b [i] -> b", x, y)``
* - | ``tf.gather_nd(x, y, batch_dims=1)``
| ``x[y[..., 0], y[..., 1]]``
- ``einx.get_at("a [...], a b [i] -> a b", x, y)``
.. list-table:: Example: ``einx.dot`` (similar to einsum)
:widths: 42 58
:header-rows: 1
* - Classical API
- einx API
* - ``np.matmul(x, y)``
- | ``einx.dot("... a [b], ... [b] c -> ... a c", x, y)``
| ``einx.dot("... [a], [a] -> ...", x, y)``
* - ``np.dot(x, y)``
- | ``einx.dot("x... [a], y... [a] b -> x... y... b", x, y)``
| ``einx.dot("... [a], [a] -> ...", x, y)``
* - ``np.tensordot(x, y, axes=1)``
- ``einx.dot("a [b], [b] c -> a c", x, y)``
* - ``np.tensordot(x, y, axes=([2], [1]))``
- ``einx.dot("a b [c], d [c] e -> a b d e", x, y)``
* - ``np.inner(x, y)``
- ``einx.dot("x... [a], y... [a] -> x... y...", x, y)``
.. list-table:: Example: ``einx.multiply``
:widths: 42 58
:header-rows: 1
* - Classical API
- einx API
* - | ``np.multiply(x, y[:, np.newaxis])``
| ``x * y[:, np.newaxis]``
- ``einx.multiply("a b, a -> a b", x, y)``
* - ``np.outer(x, y)``
- ``einx.multiply("a, b -> a b", x, y)``
* - ``np.kron(x, y)``
- ``einx.multiply("a..., b... -> (a b)...", x, y),``
* - ``scipy.linalg.khatri_rao(x, y)``
- ``einx.multiply("a c, b c -> (a b) c", x, y)``
.. list-table:: Example: ``einx.flip``
:widths: 42 58
:header-rows: 1
* - Classical API
- einx API
* - | ``np.flip(x, axis=0)``
| ``np.flipud(x)``
- ``einx.flip("[a] b", x)``
* - ``np.fliplr(x)``
- ``einx.flip("a [b]", x)``
..
* - ``einx.rearrange``
- ``np.reshape`` ``np.transpose`` ``np.squeeze`` ``np.expand_dims`` ``tensor[np.newaxis]`` ``np.stack`` ``np.hstack`` ``np.concatenate``
While elementary operations and vectorization are decoupled conceptually to provide a universal API, the implementation of the operations
in the respective backends does not necessarily follow the same decoupling. For example, a matrix multiplication is represented as a vectorized
dot-product in einx (using ``einx.dot``), but still invokes an efficient matmul operation on the backend instead of a vectorized evaluation of the dot product.

python-einx-0.3.0/docs/source/gettingstarted/commonnnops.rst

Example: Common neural network operations
#########################################
einx allows formulating many common operations of deep learning models as concise expressions. This page provides a few examples.
.. code-block:: python
import einx
import einx.nn.{torch|flax|haiku|equinox|keras} as einn
LayerScale
----------
Multiply the input tensor ``x`` with a learnable parameter per channel that is initialized with a small value:
.. code-block:: python
x = einx.multiply("... [c]", x, einn.param(init=1e-5))
Reference: `LayerScale explained `_
Prepend class-token
-------------------
Flatten the spatial axes of an n-dimensional input tensor ``x`` and prepend a learnable class token:
.. code-block:: python
x = einx.rearrange("b s... c, c -> b (1 + (s...)) c", x, einn.param(name="class_token"))
Reference: `Classification token in Vision Transformer `_
Positional embedding
--------------------
Add a learnable positional embedding onto all tokens of the input ``x``. Works with n-dimensional inputs (text, image, video, ...):
.. code-block:: python
x = einx.add("b [s... c]", x, einn.param(name="pos_embed", init=nn.initializers.normal(stddev=0.02)))
Reference: `Position embeddings in Vision Transformer `_
Word embedding
--------------
Retrieve a learnable embedding vector for each token in the input sequence ``x``:
.. code-block:: python
x = einx.get_at("[v] c, b t -> b t c", einn.param(name="vocab_embed"), x, v=50257, c=1024)
Reference: `Torch tutorial on word embeddings `_
Layer normalization
-------------------
Compute the mean and variance along the channel axis, and normalize the tensor by subtracting the mean and dividing by the standard deviation.
Apply learnable scale and bias:
.. code-block:: python
mean = einx.mean("... [c]", x, keepdims=True)
var = einx.var("... [c]", x, keepdims=True)
x = (x - mean) * torch.rsqrt(var + epsilon)
x = einx.add("... [c]", x, einn.param(name="bias"))
x = einx.multiply("... [c]", x, einn.param(name="scale"))
This can similarly be achieved using the ``einn.Norm`` layer:
.. code-block:: python
import einx.nn.{torch|flax|haiku|...} as einn
x = einn.Norm("... [c]")(x)
Reference: `Layer normalization explained `_
Multihead attention
-------------------
Compute multihead attention for the queries ``q``, keys ``k`` and values ``v`` with ``h = 8`` heads:
.. code-block:: python
a = einx.dot("b q (h c), b k (h c) -> b q k h", q, k, h=8)
a = einx.softmax("b q [k] h", a)
x = einx.dot("b q k h, b k (h c) -> b q (h c)", a, v)
Reference: `Multi-Head Attention `_
Shifted window attention
------------------------
Shift and partition the input tensor ``x`` into windows with sidelength ``w``, compute self-attention in each window, and unshift and merge windows again. Works with
n-dimensional inputs (text, image, video, ...):
.. code-block:: python
# Compute axis values so we don't have to specify s and w manually later
consts = einx.solve("b (s w)... c", x, w=16)
# Shift and partition windows
x = einx.roll("b [...] c", x, shift=-shift)
x = einx.rearrange("b (s w)... c -> (b s...) (w...) c", x, **consts)
# Compute attention
...
# Unshift and merge windows
x = einx.rearrange("(b s...) (w...) c -> b (s w)... c", x, **consts)
x = einx.roll("b [...] c", x, shift=shift)
Reference: `Swin Transformer `_
Multilayer Perceptron along spatial axes (MLP-Mixer)
----------------------------------------------------
Apply a weight matrix multiplication along the spatial axes of the input tensor:
.. code-block:: python
x = einx.dot("b [s...->s2] c", x, einn.param(name="weight1"))
...
x = einx.dot("b [s2->s...] c", x, einn.param(name="weight2"), s=(256, 256))
Or with the ``einn.Linear`` layer that includes a bias term:
.. code-block:: python
x = einn.Linear("b [s...->s2] c")(x)
...
x = einn.Linear("b [s2->s...] c", s=(256, 256))(x)
Reference: `MLP-Mixer `_
The following page provides an example implementation of GPT-2 with ``einx`` and ``einn`` using many of these operations and validates
their correctness by loading pretrained weights and generating some example text.

python-einx-0.3.0/docs/source/gettingstarted/gpt2.rst

Example: GPT-2
##############
We succeeded in taking that picture, and, if you look at it, you see a dot. That's here. That's home. That's us. On it, *we wrote, "We are the people."*
-- Carl Sagan & GPT-2
In this example, we will reimplement the GPT-2 architecture using einx and the deep learning framework `Haiku `_, load
pretrained weights from Hugging Face and validate the model by generating some text.
.. code-block:: python
import haiku as hk
import jax
import jax.numpy as jnp
import einx
from functools import partial
import einx.nn.haiku as einn
import numpy as np
# Define some layer types we will use.
# 1. Use channels-last layout
# 2. Use layer normalization, and an epsilon of 1e-5 as in the original implementation
Linear = partial(einn.Linear, "... [_->channels]")
Norm = partial(einn.Norm, "... [c]", epsilon=1e-5)
The main building block of GPT-2 consists of multi-head self-attention and a multi-layer perceptron (MLP). Each sub-block uses a residual connection and
layer normalization at the beginning of the residual block:
.. code-block:: python
class Block(hk.Module):
heads: int = 25
mlp_ratio: int = 4
def __call__(self, x):
# ########### Attention block ###########
x0 = x
x = Norm()(x)
# Predict queries, keys and values
x = Linear(channels=3 * x.shape[-1])(x)
q, k, v = jnp.split(x, 3, axis=-1)
# Compute attention matrix over h heads
q = q * ((q.shape[-1] // self.heads) ** -0.5)
attn = einx.dot("b q (h c), b k (h c) -> b q k h", q, k, h=self.heads)
# Apply causal mask
mask = jnp.tril(jnp.ones((q.shape[1], q.shape[1]), dtype=bool))
attn = einx.where("q k, b q k h,", mask, attn, -jnp.inf)
# Apply softmax and compute weighted average over the input tokens
attn = einx.softmax("b q [k] h", attn)
x = einx.dot("b q k h, b k (h c) -> b q (h c)", attn, v)
# Output projection
x = Linear(channels=x.shape[-1])(x)
x = x + x0
# ########### MLP block ###########
x0 = x
x = Norm()(x)
x = Linear(channels=x.shape[-1] * self.mlp_ratio)(x)
x = jax.nn.gelu(x)
x = Linear(channels=x0.shape[-1])(x)
x = x + x0
return x
The multi-head attention requires no additional statements to split the channel axis into multiple heads or merge the heads back into a single axis.
We instead just specify the channels axis as an :ref:`axis composition ` of ``h`` heads and ``c`` channels per head:
.. code-block:: python
attn = einx.dot("b q (h c), b k (h c) -> b q k h", q, k, h=self.heads)
...
x = einx.dot("b q k h, b k (h c) -> b q (h c)", attn, v)
We can verify the correctness of these operations by inspecting the jit-compiled function:
>>> graph = einx.dot("b q (h c), b k (h c) -> b q k h", q, k, h=self.heads, graph=True)
>>> print(graph)
import jax.numpy as jnp
def op0(i0, i1):
x0 = jnp.reshape(i0, (1, 1024, 25, 64))
x1 = jnp.reshape(i1, (1, 1024, 25, 64))
x2 = jnp.einsum("abcd,aecd->abec", x0, x1)
return x2
The final GPT-2 model first embeds the input tokens and adds positional embeddings. It then applies a number of main blocks and maps the output onto next token
logits using a linear layer:
.. code-block:: python
class GPT2(hk.Module):
channels: int = 1600
depth: int = 48
vocab_size: int = 50257
block_size: int = 1024
def __call__(self, x):
# Word embedding: Retrieve embedding for each token from the word_embed table
x = einx.get_at("[v] c, b t -> b t c", einn.param(name="word_embed"), x, v=self.vocab_size, c=self.channels)
# Positional embedding
x = einx.add("b [t c]", x, einn.param(name="pos_embed", init=hk.initializers.RandomNormal(stddev=0.02)))
# Blocks
for i in range(self.depth):
x = Block(name=f"block{i}")(x)
x = Norm()(x)
# Classifier
x = Linear(channels=self.vocab_size, bias=False)(x)
return x
We use tensor factories with ``einn.param`` to construct the word and positional embeddings (see
:doc:`Tutorial: Neural networks `).
With this, we're done with the model definition. Next, we'll define some input data that the model will be applied to and encode it to token representation:
.. code-block:: python
text = ("We succeeded in taking that picture, and, if you look at it, you see a dot."
"That's here. That's home. That's us. On it,")
print(f"Input: {text}")
# Encode text to tokens
import tiktoken
encoder = tiktoken.get_encoding("gpt2")
tokens = np.asarray(encoder.encode_ordinary(text))
n = len(tokens)
# Pad tokens to input block size
tokens = np.pad(tokens, (0, GPT2.block_size - n), constant_values=0)
The model is initialized using a dummy batch (see `Haiku Basics `_):
.. code-block:: python
import time
rng = jax.random.PRNGKey(int(time.time() * 1000))
model = hk.transform(lambda x: GPT2()(x))
params = model.init(rng, tokens[np.newaxis]) # Add batch axis to tokens using np.newaxis
At this point, ``params`` contains only randomly initialized weights. We download the original model weights for the XL variant of GPT-2 from
`Hugging Face `_ and load them into our model using the
`weightbridge 🌉 `_ library:
.. code-block:: python
# Download original weights
import transformers # only used to download weights
pretrained_params = {k: np.asarray(v) for k, v in transformers.GPT2LMHeadModel.from_pretrained(f"gpt2-xl").state_dict().items()}
pretrained_params["lm_head.weight"] = np.transpose(pretrained_params["lm_head.weight"], (1, 0))
pretrained_params = {k: v for k, v in pretrained_params.items() if not k.endswith(".attn.bias") and not k.endswith(".attn.masked_bias")}
# Map weights to our model implementation
import weightbridge
params = weightbridge.adapt(pretrained_params, params, hints=[("norm_1", "ln_2")])
Finally, we can run several forward passes to predict next tokens:
.. code-block:: python
apply = jax.jit(model.apply) # Just-in-time compile the forward pass
temperature = 0.3
for _ in range(10): # Predict 10 next tokens
logits = apply(params, rng, tokens[np.newaxis])[0]
logits = logits[n - 1] # Get logits for next token
tokens[n] = jax.random.categorical(rng, logits / temperature) # Sample next token
n += 1
print(f"Prediction: {encoder.decode(tokens[:n])}")
Input:
We succeeded in taking that picture, and, if you look at it, you see a dot. That's here. That's home. That's us. On it,
Prediction:
We succeeded in taking that picture, and, if you look at it, you see a dot. That's here. That's home. That's us. On it, we wrote, "We are the people."
The `full example script can be found here `_, and a similar example script for the
`Mamba language model using Flax can be found here `_.

python-einx-0.3.0/docs/source/gettingstarted/installation.rst

Installation
############
einx can be installed as follows:
.. code::
pip install einx
If you want to install the latest version from GitHub, you can do so using:
.. code::
pip install git+https://github.com/fferflo/einx.git
einx automatically detects backends like PyTorch when it is run, but does not include hard dependencies for the corresponding packages.
If you plan to use einx with a specific backend, you can also install it as follows:
.. code::
pip install einx[torch]
This will add a dependency for PyTorch and enforce the version requirements of einx (i.e. PyTorch >= 2.0.0).
This is currently only supported for PyTorch (``einx[torch]``) and Keras (``einx[keras]``).

python-einx-0.3.0/docs/source/gettingstarted/introduction.rst

.. toctree::
:caption: Introduction
:maxdepth: 3
Introduction
############
einx is a Python library that provides a universal interface to formulate tensor operations in frameworks such as Numpy, PyTorch, Jax and Tensorflow.
The design is based on the following principles:
1. **Provide a set of elementary tensor operations** following Numpy-like naming: ``einx.{sum|max|where|add|dot|flip|get_at|...}``
2. **Use einx notation to express vectorization of the elementary operations.** The notation is inspired by `einops `_,
but introduces several novel concepts such as ``[]``-bracket notation and full composability that allow using it as a universal language for tensor operations.
einx can be integrated and mixed with existing code seamlessly. All operations are :doc:`just-in-time compiled `
into regular Python functions using Python's `exec() `_ and invoke operations from the respective framework.
**Next steps:**
- :doc:`Installation `
- :doc:`Tutorial `
python-einx-0.3.0/docs/source/gettingstarted/tutorial_neuralnetworks.rst

Tutorial: Neural networks
#########################
einx provides several neural network layer types for deep learning frameworks (`PyTorch `_, `Flax `_,
`Haiku `_, `Equinox `_, `Keras `_) in the ``einx.nn.*`` namespace
based on the functions in ``einx.*``. These layers provide abstractions that can implement a wide variety of deep learning operations using einx notation.
The ``einx.nn.*`` namespace is entirely optional, and is imported as follows:
.. code::
import einx.nn.{torch|flax|haiku|equinox|keras} as einn
Motivation
----------
The main idea for implementing layers in einx is to exploit :ref:`tensor factories ` to initialize the weights of a layer.
For example, consider the following linear layer:
.. code::
x = einx.dot("... [c1->c2]", x, w) # x * w
x = einx.add("... [c2]", x, b) # x + b
The arguments ``w`` and ``b`` represent the layer weights. Instead of determining the shapes of ``w`` and ``b`` in advance to create the weights manually,
we define ``w`` and ``b`` as tensor factories that
are called inside the einx functions once the shapes are determined. For example, in the Haiku framework ``hk.get_parameter`` is used to create new weights
in the current module and can be defined as a tensor factory as follows:
.. code::
import haiku as hk
class Linear(hk.Module):
def __call__(self, x):
w = lambda shape: hk.get_parameter(name="weight", shape=shape, dtype="float32", init=hk.initializers.VarianceScaling(1.0, "fan_in", "truncated_normal"))
b = lambda shape: hk.get_parameter(name="bias", shape=shape, dtype="float32", init=hk.initializers.Constant(0.0))
x = einx.dot("b... [c1->c2]", x, w, c2=64)
x = einx.add("b... [c2]", x, b)
return x
Unlike a tensor, the tensor factory does not provide shape constraints to the expression solver and requires that we define the missing axes (``c2``) manually. Here,
this corresponds to specifying the number of output channels of the linear layer. All other axis values are determined implicitly from the input shapes.
The weights are created once a layer is run on the first input batch. This is common practice in jax-based frameworks like Flax and Haiku where a model
is typically first invoked with a dummy batch to instantiate all weights.
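For example, a hedged sketch of how the Haiku ``Linear`` module defined above could be instantiated with a dummy batch (the input shape is illustrative):

.. code:: python

    import jax
    import jax.numpy as jnp
    import haiku as hk

    def forward(x):
        return Linear()(x)

    model = hk.transform(forward)
    x = jnp.ones((8, 32))                          # dummy batch
    params = model.init(jax.random.PRNGKey(0), x)  # weights are created here
    y = model.apply(params, None, x)               # y.shape == (8, 64)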
In PyTorch, we rely on `lazy modules `_
by creating weights as ``torch.nn.parameter.UninitializedParameter`` in the constructor and calling their ``materialize`` method on the first input batch. This is
handled automatically by einx (see below).
Parameter definition with ``einn.param``
----------------------------------------
einx provides the function ``einn.param`` to create *parameter factories* for the respective deep learning framework. ``einn.param`` is simply a convenience wrapper for
the ``lambda shape: ...`` syntax that is used in the example above:
.. code:: python
# w1 and w2 give the same result when used as tensor factories in einx functions:
w1 = lambda shape: hk.get_parameter(name="weight", shape=shape, dtype="float32", init=...)
w2 = einn.param(name="weight", dtype="float32", init=...)
The utility of ``einn.param`` comes from providing several useful default arguments that simplify the definition of parameters:
* **Default argument for** ``init``
The type of (random) initialization that is used for a parameter in neural networks typically depends on the operation that the parameter is used in. For example:
* A bias parameter is used in an ``add`` operation and often initialized with zeros.
* A weight parameter in linear layers is used in a ``dot`` operation and initialized e.g. using
`Lecun normal initialization `_
based on the fan-in or fan-out of the layer.
* A scale parameter is used in a ``multiply`` operation and e.g. initialized with ones in normalization layers.
To allow ``einn.param`` to use a default initialization method based on the operation that it is used in, einx functions like :func:`einx.dot` and :func:`einx.add`
forward their name as optional arguments to tensor factories. ``einn.param`` then defines a corresponding initializer in the respective framework and
uses it as a default argument for ``init``. E.g. in Flax:
.. code:: python
from flax import linen as nn
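# Simplified sketch of the default-init selection inside einn.param:
# `init` is the operation name forwarded by the einx function (e.g. "dot", "add"),
# and `kwargs` holds further arguments forwarded by einx.dot (in_axis, out_axis, batch_axis).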
if init == "get_at" or init == "rearrange":
init = nn.initializers.normal(stddev=0.02)
elif init == "add":
init = nn.initializers.zeros_init()
elif init == "multiply":
init = nn.initializers.ones_init()
elif init == "dot":
init = nn.initializers.lecun_normal(kwargs["in_axis"], kwargs["out_axis"], kwargs["batch_axis"])
:func:`einx.dot` additionally determines ``in_axis``, ``out_axis`` and ``batch_axis`` from the einx expression and forwards them as optional arguments
to tensor factories. In this case, they allow ``nn.initializers.lecun_normal`` to determine the fan-in of the layer and choose the initialization accordingly.
* **Default argument for** ``name``
A default name is determined implicitly from the operation that the parameter is used in, for example:
.. list-table::
:widths: 30 30
:header-rows: 0
* - Operation
- Name
* - :func:`einx.add`
- ``bias``
* - :func:`einx.multiply`
- ``scale``
* - :func:`einx.dot`
- ``weight``
* - :func:`einx.get_at`
- ``embedding``
* - :func:`einx.rearrange`
- ``embedding``
* **Default argument for** ``dtype``
The default data type of the parameter is determined from the ``dtype`` member variable of the respective module if it exists, and chosen as ``float32`` otherwise.
Any default argument in ``einn.param`` can be overridden by simply passing the respective argument explicitly:
.. code::
# Initialize bias with non-zero values
einx.add("b... [c]", x, einn.param(init=nn.initializers.normal(stddev=0.02)))
# Initialize layerscale with small value
einx.multiply("b... [c]", x, einn.param(init=1e-5, name="layerscale"))
If no default argument can be determined (e.g. because there is no default initialization for an operation, or the module does not have a ``dtype`` member) and the
argument is not specified explicitly in ``einn.param``, an exception is raised.
Example layer using ``einn.param``
----------------------------------
Our definition of a linear layer above that used the ``lambda shape: ...`` syntax can be simplified using ``einn.param`` as shown below.
**Haiku**
.. code:: python
import haiku as hk
class Linear(hk.Module):
dtype: str = "float32"
def __call__(self, x):
x = einx.dot("... [c1->c2]", x, einn.param(), c2=64)
x = einx.add("... [c2]", x, einn.param())
return x
In Haiku, ``hk.get_parameter`` and ``hk.get_state`` can be passed as the first parameter of ``einn.param`` to determine whether to create a parameter or state variable:
.. code:: python
einx.add("... [c]", x, einn.param(hk.get_parameter)) # calls einn.param(hk.get_parameter)
einx.add("... [c]", x, einn.param()) # calls einn.param(hk.get_parameter)
einx.add("... [c]", x, hk.get_parameter) # calls einn.param(hk.get_parameter)
einx.add("... [c]", x, einn.param(hk.get_state)) # calls einn.param(hk.get_state)
einx.add("... [c]", x, hk.get_state) # calls einn.param(hk.get_state)
**Flax**
.. code:: python
from flax import linen as nn
class Linear(nn.Module):
dtype: str = "float32"
@nn.compact
def __call__(self, x):
x = einx.dot("... [c1->c2]", x, einn.param(self), c2=64)
x = einx.add("... [c2]", x, einn.param(self))
return x
In Flax, parameters are created by calling the ``self.param`` or ``self.variable`` method of the current module. For
convenience, einx provides several options to determine which one is used:
.. code:: python
einx.add("... [c]", x, einn.param(self.param)) # calls einn.param(self.param)
einx.add("... [c]", x, einn.param(self)) # calls einn.param(self.param)
einx.add("... [c]", x, self.param) # calls einn.param(self.param)
einx.add("... [c]", x, self) # calls einn.param(self.param)
einx.add("... [c]", x, einn.param(self.variable, col="stats")) # calls einn.param(self.variable, col="stats")
**PyTorch**
.. code::
import torch
import torch.nn as nn
class Linear(nn.Module):
def __init__(self):
super().__init__()
self.w = nn.parameter.UninitializedParameter(dtype=torch.float32)
self.b = nn.parameter.UninitializedParameter(dtype=torch.float32)
def forward(self, x):
x = einx.dot("b... [c1->c2]", x, self.w, c2=64)
x = einx.add("b... [c2]", x, self.b)
return x
In PyTorch, parameters have to be created in the constructor of the module as ``nn.parameter.UninitializedParameter`` and ``nn.parameter.UninitializedBuffer``
(see `lazy modules `_). They can
be passed to einx functions directly, or by using ``einn.param`` (e.g. to specify additional arguments):
.. code:: python
einx.add("... [c]", x, einn.param(self.w)) # calls einn.param(self.w)
einx.add("... [c]", x, self.w) # calls einn.param(self.w)
For PyTorch, ``einn.param`` does not support a ``dtype`` and ``name`` argument since these are specified in the constructor.
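A hedged usage sketch of the module above (the shapes are illustrative): the lazy parameters are materialized automatically on the first forward pass:

.. code:: python

    import torch

    layer = Linear()
    y = layer(torch.ones(8, 32))  # materializes w with shape (32, 64) and b with shape (64,)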
**Equinox**
.. code::
import jax
import equinox as eqx
class Linear(eqx.Module):
w: jax.Array
b: jax.Array
dtype: str = "float32"
def __init__(self):
self.w = None
self.b = None
def __call__(self, x, rng=None):
x = einx.dot("b... [c1->c2]", x, einn.param(self, name="weight", rng=rng), c2=64)
x = einx.add("b... [c2]", x, einn.param(self, name="bias", rng=rng))
return x
In Equinox, parameters have to be specified as dataclass member variables of the module. In einx, these variables are set to ``None`` in the constructor and initialized in the
``__call__`` method instead by passing the module and member variable name to ``einn.param``. This initializes the parameter and stores it in the respective
member variable, such that the module can be used as a regular Equinox module. When a parameter is initialized randomly, it also requires passing a random key ``rng`` to
``einn.param`` on the first call:
.. code:: python
einx.add("... [c]", x, einn.param(self, rng=rng))
Stateful layers are currently not supported for Equinox, since they require the shape of the state variable to be known in the constructor.
**Keras**
.. code::
class Linear(einn.Layer):
def call(self, x):
x = einx.dot("b... [c1->c2]", x, einn.param(self, name="weight"), c2=64)
x = einx.add("b... [c2]", x, einn.param(self, name="bias"))
return x
In Keras, parameters can be created in a layer's ``build`` method instead of the ``__init__`` method, which gives access to the shapes of the layer's input arguments. The regular
forward-pass is defined in the ``call`` method. einx provides the base class ``einn.Layer`` which simply implements the ``build`` method to call the layer's ``call`` method
with dummy arguments and thereby initialize the layer parameters.
.. code:: python
einx.add("... [c]", x, einn.param(self))
Layers
------
einx provides the layer types ``einn.{Linear|Norm|Dropout}`` that are implemented as outlined above.
**einn.Norm** implements a normalization layer with optional exponential moving average (EMA) over the computed statistics. The first parameter is an einx expression for
the axes along which the statistics for normalization are computed. The second parameter is an einx expression for the axes corresponding to the bias and scale terms, and
defaults to ``b... [c]``. The different sub-steps can be toggled by passing ``True`` or ``False`` for the ``mean``, ``var``, ``scale`` and ``bias`` parameters. The EMA is used only if
``decay_rate`` is passed.
A variety of normalization layers can be implemented using this abstraction:
.. code::
layernorm = einn.Norm("b... [c]")
instancenorm = einn.Norm("b [s...] c")
groupnorm = einn.Norm("b [s...] (g [c])", g=8)
batchnorm = einn.Norm("[b...] c", decay_rate=0.9)
rmsnorm = einn.Norm("b... [c]", mean=False, bias=False)
**einn.Linear** implements a linear layer with optional bias term. The first parameter is an operation string that is forwarded to :func:`einx.dot` to multiply the weight matrix.
A bias is added corresponding to the marked output expressions, and is disabled by passing ``bias=False``.
.. code::
channel_mix = einn.Linear("b... [c1->c2]", c2=64)
spatial_mix1 = einn.Linear("b [s...->s2] c", s2=64)
spatial_mix2 = einn.Linear("b [s2->s...] c", s=(64, 64))
patch_embed = einn.Linear("b (s [s2->])... [c1->c2]", s2=4, c2=64)
**einn.Dropout** implements a stochastic dropout. The first parameter specifies the shape of the mask in einx notation that is applied to the input tensor.
.. code::
dropout = einn.Dropout("[...]", drop_rate=0.2)
spatial_dropout = einn.Dropout("[b] ... [c]", drop_rate=0.2)
droppath = einn.Dropout("[b] ...", drop_rate=0.2)
The following is an example of a simple fully-connected network for image classification using ``einn`` in Flax:
.. code::
from flax import linen as nn
import einx.nn.flax as einn
class Net(nn.Module):
@nn.compact
def __call__(self, x, training):
for c in [1024, 512, 256]:
x = einn.Linear("b [...->c]", c=c)(x)
x = einn.Norm("[b] c", decay_rate=0.99)(x, training=training)
x = nn.gelu(x)
x = einn.Dropout("[...]", drop_rate=0.2)(x, training=training)
x = einn.Linear("b [...->c]", c=10)(x) # 10 classes
return x
Example trainings on CIFAR10 are provided in ``examples/train_{torch|flax|haiku|equinox|keras}.py`` for models implemented using ``einn``. ``einn`` layers can be combined
with other layers or used as submodules in the respective framework seamlessly.
The following page provides examples of common operations in neural networks using ``einx`` and ``einn`` notation. python-einx-0.3.0/docs/source/gettingstarted/tutorial_notation.rst 0000664 0000000 0000000 00000031473 15052160342 0025572 0 ustar 00root root 0000000 0000000 Tutorial: Notation
#######################
This tutorial introduces the Einstein-inspired notation that is used in einx. It is based on and
compatible with the notation used in `einops `_, but
introduces several new concepts such as ``[]``-bracket notation, composable ellipses and axis
concatenations. See :doc:`How is einx different from einops? ` for a complete list
of differences.
Introduction
------------
An einx expression provides a description of the axes of a given tensor. In the simplest case, each dimension is given a unique name (``a``, ``b``, ``c``), and the names
are listed to form an einx expression:
>>> x = np.ones((2, 3, 4))
>>> einx.matches("a b c", x) # Check whether expression matches the tensor's shape
True
>>> einx.matches("a b", x)
False
einx expressions are used to formulate tensor operations such as reshaping and permuting axes in an intuitive way. Instead of defining an
operation in classical index-based notation
>>> y = np.transpose(x, (0, 2, 1))
>>> y.shape
(2, 4, 3)
we instead provide the input and output expressions in einx notation and let einx determine the necessary operations:
>>> y = einx.rearrange("a b c -> a c b", x)
>>> y.shape
(2, 4, 3)
The purpose of :func:`einx.rearrange` is to map tensors between different einx expressions. It does not perform any computation itself,
but rather forwards the computation to the respective backend, e.g. Numpy.
To verify that the correct backend calls are made, the just-in-time compiled function that einx invokes for this expression can be printed using ``graph=True``:
>>> graph = einx.rearrange("a b c -> a c b", x, graph=True)
>>> print(graph)
import numpy as np
def op0(i0):
x0 = np.transpose(i0, (0, 2, 1))
return x0
The function shows that einx performs the expected call to ``np.transpose``.
.. note::
einx traces the backend calls made for a given operation and just-in-time compiles them into a regular Python function using Python's
`exec() `_. When the function is called with the same signature of arguments,
the compiled function is reused and therefore incurs no additional overhead other than for cache lookup
(see :doc:`Just-in-time compilation `)
.. _axiscomposition:
Axis composition
----------------
Multiple axes can be wrapped in parentheses to indicate that they represent an *axis composition*.
>>> x = np.ones((6, 4))
>>> einx.matches("(a b) c", x)
True
The composition ``(a b)`` is an axis itself and comprises the subaxes ``a`` and ``b`` which are laid out in
`row-major order `_. This corresponds to ``a`` chunks of ``b`` elements each.
The length of the composed axis is the product of the subaxis lengths.
We can use :func:`einx.rearrange` to compose and decompose axes in a tensor by passing the respective einx expressions:
>>> # Stack 2 chunks of 3 elements into a single dimension with length 6
>>> x = np.ones((2, 3, 4))
>>> einx.rearrange("a b c -> (a b) c", x).shape
(6, 4)
>>> # Divide a dimension of length 6 into 2 chunks of 3 elements each
>>> x = np.ones((6, 4))
>>> einx.rearrange("(a b) c -> a b c", x, a=2).shape
(2, 3, 4)
Since the decomposition is ambiguous w.r.t. the values of ``a`` and ``b`` (for example ``a=2 b=3`` and ``a=1 b=6`` would be valid),
additional constraints have to be passed to find unique axis values, e.g. ``a=2`` as in the example above.
Composing and decomposing axes is a cheap operation and e.g. preferred over calling ``np.split``. The graph of these functions shows
that it uses a `np.reshape `_
operation with the requested shape:
>>> print(einx.rearrange("(a b) c -> a b c", x, a=2, graph=True))
import numpy as np
def op0(i0):
x0 = np.reshape(i0, (2, 3, 4))
return x0
>>> print(einx.rearrange("a b c -> (a b) c", x, graph=True))
import numpy as np
def op0(i0):
x0 = np.reshape(i0, (6, 4))
return x0
.. note::
See `this great einops tutorial `_ for hands-on
illustrations of axis composition using a batch of images.
Axis compositions are used for example to divide the channels of a tensor into equally sized groups (as in multi-headed attention),
or to divide an image into patches by decomposing the spatial dimensions (if the image resolution is evenly divisible by the patch size).
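For example, a minimal sketch of the multi-head grouping mentioned above (the head count ``h=8`` is an illustrative choice):

>>> x = np.ones((2, 10, 64))
>>> einx.rearrange("b t (h c) -> b h t c", x, h=8).shape
(2, 8, 10, 8)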
Ellipsis
--------
An *ellipsis* repeats the expression that appears directly in front of it:
>>> x = np.ones((2, 3, 4))
>>> einx.matches("a b...", x) # Expands to "a b.0 b.1"
True
The number of repetitions is determined from the rank of the input tensors:
>>> x = np.ones((2, 3, 4, 5))
>>> einx.matches("a b...", x) # Expands to "a b.0 b.1 b.2"
True
Using ellipses e.g. for spatial dimensions often results in simpler and more readable expressions, and allows using the same expression
for tensors with different dimensionality:
>>> # Divide an image into a list of patches with size p=8
>>> x = np.ones((256, 256, 3), dtype="uint8")
>>> einx.rearrange("(s p)... c -> (s...) p... c", x, p=8).shape
(1024, 8, 8, 3)
>>> # Divide a volume into a list of cubes with size p=8
>>> x = np.ones((256, 256, 256, 3), dtype="uint8")
>>> einx.rearrange("(s p)... c -> (s...) p... c", x, p=8).shape
(32768, 8, 8, 8, 3)
In index-based notation, this operation requires multiple backend calls that might be difficult to understand at first glance.
The einx call on the other hand clearly conveys the intent of the operation and requires less code:
>>> print(einx.rearrange("(s p)... c -> (s...) p... c", x, p=8, graph=True))
import numpy as np
def op0(i0):
x0 = np.reshape(i0, (32, 8, 32, 8, 3))
x1 = np.transpose(x0, (0, 2, 1, 3, 4))
x2 = np.reshape(x1, (1024, 8, 8, 3))
return x2
In einops-style notation, an ellipsis always appears at root-level and is anonymous, i.e. does not have a preceding expression.
To be fully compatible with einops notation, einx implicitly converts anonymous ellipses by adding an axis in front:
.. code::
einx.rearrange("b ... -> ... b", x)
# same as
einx.rearrange("b _anonymous_ellipsis_axis... -> _anonymous_ellipsis_axis... b", x)
Unnamed axes
------------
An *unnamed axis* is a number in the einx expression and is similar to using a new unique axis name with an additional constraint specifying its length:
>>> x = np.ones((2, 3, 4))
>>> einx.matches("2 b c", x)
True
>>> einx.matches("a b c", x, a=2)
True
>>> einx.matches("a 1 c", x)
False
Unnamed axes are used for example as an alternative to ``np.expand_dims``, ``np.squeeze``, ``np.newaxis``, ``np.broadcast_to``:
>>> x = np.ones((2, 1, 3))
>>> einx.rearrange("a 1 b -> 1 1 a b 1 5 6", x).shape
(1, 1, 2, 3, 1, 5, 6)
Since each unnamed axis is given a unique name, multiple unnamed axes do not refer to the same underlying tensor dimension. This can lead to unexpected behavior:
>>> einx.rearrange("a b c -> a c b", x).shape
(2, 4, 3)
>>> einx.rearrange("2 b c -> 2 c b", x).shape # Raises an exception
Concatenation
-------------
A *concatenation* represents an axis in einx notation along which two or more subtensors are concatenated. Using axis concatenations,
we can describe operations such as
`np.concatenate `_,
`np.split `_,
`np.stack `_,
`einops.pack and einops.unpack `_ in pure einx notation. A concatenation axis is marked with
``+`` and wrapped in parentheses, and its length is the sum of the subaxis lengths.
>>> x = np.ones((5, 4))
>>> einx.matches("(a + b) c", x)
True
This is used for example to concatenate tensors that do not have compatible dimensions:
>>> x = np.ones((256, 256, 3))
>>> y = np.ones((256, 256))
>>> einx.rearrange("h w c, h w -> h w (c + 1)", x, y).shape
(256, 256, 4)
The graph shows that einx first reshapes ``y`` by adding a channel dimension, and then concatenates the tensors along that axis:
>>> print(einx.rearrange("h w c, h w -> h w (c + 1)", x, y, graph=True))
import numpy as np
def op0(i0, i1):
x0 = np.reshape(i1, (256, 256, 1))
x1 = np.concatenate([i0, x0], axis=2)
return x1
Splitting is supported analogously:
>>> z = np.ones((256, 256, 4))
>>> x, y = einx.rearrange("h w (c + 1) -> h w c, h w", z)
>>> x.shape, y.shape
((256, 256, 3), (256, 256))
Unlike the index-based `np.concatenate `_, einx also broadcasts subtensors if required:
>>> # Append a number to all channels
>>> x = np.ones((256, 256, 3))
>>> einx.rearrange("... c, 1 -> ... (c + 1)", x, [42]).shape
(256, 256, 4)
Additional constraints
----------------------
einx uses a `SymPy `_-based solver to determine the values of named axes in Einstein expressions
(see :doc:`How does einx parse expressions? `).
In many cases, the shapes of the input tensors provide enough constraints to determine the values of all named axes in the solver.
For other cases, einx functions accept ``**parameters`` that are used to specify the values of some or all named axes and provide
additional constraints to the solver:
.. code::
x = np.zeros((10,))
einx.rearrange("(a b) -> a b", x) # Fails: Values of a and b cannot be determined
einx.rearrange("(a b) -> a b", x, a=5) # Succeeds: b determined by solver
einx.rearrange("(a b) -> a b", x, b=2) # Succeeds: a determined by solver
einx.rearrange("(a b) -> a b", x, a=5, b=2) # Succeeds
einx.rearrange("(a b) -> a b", x, a=5, b=5) # Fails: Conflicting constraints
.. _bracketnotation:
Bracket notation
----------------
einx introduces the ``[]``-notation to denote axes that an operation is applied to. This corresponds to the ``axis`` argument in index-based notation:
.. code::
einx.sum("a [b]", x)
# same as
np.sum(x, axis=1)
einx.sum("a [...]", x)
# same as
np.sum(x, axis=tuple(range(1, x.ndim)))
In general, brackets define which sub-tensors the given elementary operation is applied to. For example, the expression ``"a [b c] d"`` indicates
that the elementary operation ``einx.sum`` is applied to sub-tensors with shape ``b c`` and vectorized over axes ``a`` and ``d``:
.. code::
einx.sum ("a [b c] d", x)
# ^^^^^^^^ ^ ^^^^^ ^
# elementary operation vectorized axis sub-tensor axes vectorized axis
Some other examples:
.. code::
einx.flip("a [b]", x, c=2) # Flip pairs of values
einx.add("... [c]", x, b) # Add bias
einx.get_at("b [h w] c, b i [2] -> b i c", x, indices) # Gather values
einx.softmax("b q [k] h", attn) # Part of attention operation
Bracket notation is fully compatible with expression rearranging and can therefore be placed anywhere inside a nested einx expression:
>>> # Compute sum over pairs of values along the last axis
>>> x = np.ones((2, 2, 16))
>>> einx.sum("... (g [c])", x, c=2).shape
(2, 2, 8)
>>> # Mean-pooling with stride 4 (if evenly divisible)
>>> x = np.ones((4, 256, 256, 3))
>>> einx.mean("b (s [ds])... c", x, ds=4).shape
(4, 64, 64, 3)
>>> print(einx.mean("b (s [ds])... c", x, ds=4, graph=True))
import numpy as np
def op0(i0):
x0 = np.reshape(i0, (4, 64, 4, 64, 4, 3))
x1 = np.mean(x0, axis=(2, 4))
return x1
.. note::
See :doc:`How does einx handle input and output tensors? ` for details on how operations are applied to tensors with nested einx expressions.
Operations are sensitive to the positioning of brackets, e.g. allowing for flexible ``keepdims=True`` behavior out-of-the-box:
>>> x = np.ones((16, 4))
>>> einx.sum("b [c]", x).shape
(16,)
>>> einx.sum("b ([c])", x).shape
(16, 1)
>>> einx.sum("b [c]", x, keepdims=True).shape
(16, 1)
In the second example, ``c`` is reduced within the composition ``(c)``, resulting in an empty composition ``()``, i.e. a trivial axis with size 1.
Composability of ``->`` and ``,``
---------------------------------
The operators ``->`` and ``,`` that delimit input and output expressions in an operation can optionally be composed with the einx expressions themselves. If
they appear within a nested expression, the expression is expanded such that ``->`` and ``,`` appear only at the root
of the expression tree. For example:
.. code::
einx.{...}("a [b -> c]", x)
# expands to
einx.{...}("a [b] -> a [c]", x)
einx.{...}("b p [i,->]", x, y)
# expands to
einx.{...}("b p [i], b p -> b p", x, y)
einx provides a wide range of elementary tensor operations that accept arguments in einx notation as described in this document.
The following tutorial gives an overview of these functions and their usage.
python-einx-0.3.0/docs/source/gettingstarted/tutorial_ops.rst 0000664 0000000 0000000 00000032512 15052160342 0024533 0 ustar 00root root 0000000 0000000 Tutorial: Operations
####################
einx represents tensor operations using a set of elementary operations that are vectorized according to the given einx expressions.
Internally, einx does not implement the operations from scratch, but forwards computation to the respective backend, e.g. by
calling `np.reshape `_,
`np.transpose `_ or
`np.sum `_ with the appropriate arguments.
This tutorial gives an overview of these operations and their usage. For a complete list of provided functions, see the :doc:`API reference `.
Rearranging
-----------
The function :func:`einx.rearrange` transforms tensors between einx expressions by determining and applying the required backend operations. For example:
>>> x = np.ones((4, 256, 17))
>>> y, z = einx.rearrange("b (s p) (c + 1) -> (b s) p c, (b p) s 1", x, p=8)
>>> y.shape, z.shape
((128, 8, 16), (32, 32, 1))
Conceptually, this corresponds with a vectorized identity mapping. Using :func:`einx.rearrange` often produces more readable and concise code than
specifying backend operations in index-based notation directly. The index-based calls can be
inspected using the just-in-time compiled function that einx creates for this expression (see :doc:`Just-in-time compilation `):
>>> print(einx.rearrange("b (s p) (c + 1) -> (b s) p c, (b p) s 1", x, p=8, graph=True))
import numpy as np
def op0(i0):
x0 = np.reshape(i0, (4, 32, 8, 17))
x1 = np.reshape(x0[:, :, :, 0:16], (128, 8, 16))
x2 = np.reshape(x0[:, :, :, 16:17], (4, 32, 8))
x3 = np.transpose(x2, (0, 2, 1))
x4 = np.reshape(x3, (32, 32, 1))
return [x1, x4]
Reduction
---------
einx provides a family of elementary operations that reduce tensors along one or more axes. For example:
.. code::
einx.sum("a [b]", x)
# same as
np.sum(x, axis=1)
einx.mean("a [...]", x)
# same as
np.mean(x, axis=tuple(range(1, x.ndim)))
These functions are specializations of :func:`einx.reduce` and use backend operations like `np.sum `_,
`np.prod `_ or `np.any `_ as the ``op`` argument:
.. code::
einx.reduce("a [b]", x, op=np.sum)
# same as
einx.sum("a [b]", x)
In ``einx.sum``, the respective backend is determined implicitly from the input tensor (see :doc:`How does einx support different tensor frameworks? `).
Generally, the operation string represents both input and output expressions, and marks reduced axes using brackets:
>>> x = np.ones((16, 8, 4))
>>> einx.sum("a [b] c -> a c", x).shape
(16, 4)
Since the output of the elementary reduction operation is a scalar, no axis is marked in the output expression.
The following shorthand notation is supported:
* When no brackets are found, brackets are placed implicitly around all axes that do not appear in the output:
.. code::
einx.sum("a b c -> a c", x) # Expands to: "a [b] c -> a c"
* When no output is given, it is determined implicitly by removing marked subexpressions from the input:
.. code::
einx.sum("a [b] c", x) # Expands to: "a [b] c -> a c"
:func:`einx.reduce` also allows custom reduction operations that accept the ``axis`` argument similar to `np.sum `_:
.. code::
def custom_mean(x, axis):
return np.sum(x, axis=axis) / x.shape[axis]
einx.reduce("a [b] c", x, op=custom_mean)
:func:`einx.reduce` fully supports expression rearranging:
>>> x = np.ones((16, 8))
>>> einx.prod("a (b [c]) -> b a", x, c=2).shape
(4, 16)
Element-by-element
------------------
einx provides a family of elementary operations that apply element-by-element operations to tensors. For example:
.. code::
einx.add("a b, b -> a b", x, y)
# same as
x + y[np.newaxis, :]
einx.multiply("a, a b -> a b", x, y)
# same as
x[:, np.newaxis] * y
einx.subtract("a, (a b) -> b a", x, y)
# requires reshape and transpose in index-based notation
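For comparison, a hedged sketch of the index-based equivalent of the last call above (assuming ``x`` has shape ``(a,)`` and ``y`` has shape ``(a*b,)``):

.. code::

    a = x.shape[0]
    # Decompose y into (a, b), subtract with broadcasting, then transpose to "b a"
    z = np.transpose(x[:, np.newaxis] - np.reshape(y, (a, -1)), (1, 0))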
The elementary operations accept and return scalars and no axes are marked with ``[]``-brackets.
Internally, the inputs are rearranged such that the operation can be applied using `Numpy broadcasting rules `_.
These functions are specializations of :func:`einx.elementwise` and use backend operations like `np.add `_,
`np.logical_and `_ and `np.where `_
as the ``op`` argument:
.. code::
einx.elementwise("a b, b -> a b", x, y, op=np.add)
# same as
einx.add("a b, b -> a b", x, y)
Generally, the operation string of :func:`einx.elementwise` represents all input and output expressions explicitly:
>>> x = np.ones((16, 8))
>>> y = np.ones((16,))
>>> einx.add("a b, a -> a b", x, y).shape
(16, 8)
The following shorthand notation is supported:
* The output is determined implicitly if one of the input expressions contains the named axes of all other inputs and if this choice is unique:
.. code::
einx.add("a b, a", x, y) # Expands to: "a b, a -> a b"
einx.where("b a, b, a", x, y, z) # Expands to "b a, b, a -> b a"
einx.subtract("a b, b a", x, y) # Raises an exception
einx.add("a b, a b", x, y) # Expands to: "a b, a b -> a b"
* Bracket notation can be used to indicate that the second input is a subexpression of the first:
.. code::
einx.add("a [b]", x, y) # Expands to: "a b, b"
.. note::
Conceptually, a different elementary operation is used in this case which is applied to tensors of equal shape rather than just scalars.
This variant might be removed in future versions.
:func:`einx.elementwise` fully supports expression rearranging:
>>> x = np.ones((16, 16, 32))
>>> bias = np.ones((4,))
>>> einx.add("b... (g [c])", x, bias).shape
(16, 16, 32)
Indexing
--------
einx provides a family of elementary operations that perform multi-dimensional indexing and update/retrieve values from tensors at specific coordinates:
.. code::
image = np.ones((256, 256, 3))
coordinates = np.ones((100, 2), dtype=np.int32)
updates = np.ones((100, 3))
# Retrieve values at specific locations in an image
y = einx.get_at("[h w] c, i [2] -> i c", image, coordinates)
# same as
y = image[coordinates[:, 0], coordinates[:, 1]]
# Update values at specific locations in an image
y = einx.set_at("[h w] c, i [2], i c -> [h w] c", image, coordinates, updates)
# same as
image[coordinates[:, 0], coordinates[:, 1]] = updates
y = image
Brackets in the first input indicate axes that are indexed, and a single bracket in the second input indicates the coordinate axis. The length of the coordinate axis should equal
the number of indexed axes in the first input. Coordinates can also be passed in separate tensors:
.. code::
coordinates_x = np.ones((100,), dtype=np.int32)
coordinates_y = np.ones((100,), dtype=np.int32)
y = einx.get_at("[h w] c, i, i -> i c", image, coordinates_x, coordinates_y)
Indexing functions are specializations of :func:`einx.index` and fully support expression rearranging:
.. code::
einx.add_at("b ([h w]) c, ([2] b) i, c i -> c [h w] b", image, coordinates, updates)
Dot-product
-----------
The function :func:`einx.dot` computes a dot-product along the marked axes:
>>> # Matrix multiplication between x and y
>>> x = np.ones((4, 16))
>>> y = np.ones((16, 8))
>>> einx.dot("a [b], [b] c -> a c", x, y).shape
(4, 8)
While operations such as matrix multiplication are conceptually represented as vectorized dot-products in einx, they are still implemented using
efficient matmul calls in the respective backend rather than a vectorized evaluation of the dot-product.
The interface of :func:`einx.dot` closely resembles the existing `np.einsum `_
which also uses Einstein-inspired notation to express matrix multiplications. In fact, :func:`einx.dot` internally forwards computation
to the ``einsum`` implementation of the respective backend, but additionally supports rearranging of expressions:
>>> # Simple grouped linear layer
>>> x = np.ones((20, 16))
>>> w = np.ones((8, 4))
>>> print(einx.dot("b (g c1), c1 c2 -> b (g c2)", x, w, g=2, graph=True))
import numpy as np
def op0(i0, i1):
x0 = np.reshape(i0, (20, 2, 8))
x1 = np.einsum("abc,cd->abd", x0, i1)
x2 = np.reshape(x1, (20, 8))
return x2
The following shorthand notation is supported:
* When no brackets are found, brackets are placed implicitly around all axes that do not appear in the output:
.. code::
einx.dot("a b, b c -> a c", x, y) # Expands to: "a [b], [b] c -> a c"
This allows using einsum-like notation with :func:`einx.dot`.
* When given two input tensors, the expression of the second input is determined implicitly by marking
its components in the input and output expression:
.. code::
einx.dot("a [b] -> a [c]", x, y) # Expands to: "a b, b c -> a c"
.. note::
Conceptually, the elementary operation in this case is not a simple dot-product, but rather a linear map from
``b`` to ``c`` channels, which motivates the usage of bracket notation in this manner.
Axes marked multiple times appear only once in the implicit second input expression:
.. code::
einx.dot("[a b] -> [a c]", x, y) # Expands to: "a b, a b c -> a c"
Other operations: ``vmap``
--------------------------
If an operation is not provided as a separate einx API, it can still be applied in einx using :func:`einx.vmap` or :func:`einx.vmap_with_axis`.
Both functions apply the same vectorization rules as other einx functions, but accept an ``op`` argument that specifies the elementary operation to apply.
In :func:`einx.vmap`, the input and output tensors of ``op`` match the marked axes in the input and output expressions:
.. code::
# A custom operation:
def op(x):
# Input: x has shape "b c"
x = np.sum(x, axis=1)
x = np.flip(x, axis=0)
# Output: x has shape "b"
return x
einx.vmap("a [b c] -> a [b]", x, op=op)
:func:`einx.vmap` is implemented using efficient automatic vectorization in the respective backend (e.g.
`jax.vmap `_, `torch.vmap `_).
einx also implements a simple ``vmap`` function for the Numpy backend for testing/debugging purposes using a Python loop.
In :func:`einx.vmap_with_axis`, ``op`` is instead given an ``axis`` argument and must follow
`Numpy broadcasting rules `_:
.. code::
# A custom operation:
def op(x, axis):
# Input: x has shape "a b c", axis is (1, 2)
x = np.sum(x, axis=axis[1])
x = np.flip(x, axis=axis[0])
# Output: x has shape "a b"
return x
einx.vmap_with_axis("(a [b c]) -> (a [b])", x, op=op, a=2, b=3, c=4)
Both :func:`einx.reduce` and :func:`einx.elementwise` are adaptations of :func:`einx.vmap_with_axis`.
Since most backend operations that accept an ``axis`` argument operate on the entire input tensor when ``axis`` is not given, :func:`einx.vmap_with_axis` can often
analogously be expressed using :func:`einx.vmap`:
>>> x = np.ones((4, 16))
>>> einx.vmap_with_axis("a [b] -> a", x, op=np.sum).shape
(4,)
>>> einx.vmap ("a [b] -> a", x, op=np.sum).shape
(4,)
>>> x = np.ones((4, 16))
>>> y = np.ones((4,))
>>> einx.vmap_with_axis("a b, a -> a b", x, y, op=np.add).shape
(4, 16)
>>> einx.vmap ("a b, a -> a b", x, y, op=np.add).shape
(4, 16)
:func:`einx.vmap` provides more general vectorization capabilities than :func:`einx.vmap_with_axis`, but might in some cases be slower if the latter relies on a
specialized implementation.
.. _lazytensorconstruction:
Misc: Tensor factories
----------------------------
All einx operations also accept tensor factories instead of tensors as arguments. A tensor factory is a function that accepts a ``shape``
argument and returns a tensor with that shape. This allows deferring the construction of a tensor to the point inside
an einx operation where its shape has been resolved, and avoids having to manually determine the shape in advance:
.. code::
einx.dot("b... c1, c1 c2 -> b... c2", x, lambda shape: np.random.uniform(shape), c2=32)
In this example, the shape of ``x`` is used by the expression solver to determine the values of ``b...`` and ``c1``. Since the tensor factory provides no shape
constraints to the solver, the remaining axis values have to be specified explicitly, i.e. ``c2=32``.
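As a hedged sketch of this mechanism (the ``make_weight`` helper and the shapes are illustrative), the factory is called with the fully resolved shape:

.. code::

    x = np.ones((2, 8, 16))

    def make_weight(shape):
        print(shape)  # the resolved shape, e.g. (16, 32)
        return np.random.uniform(size=shape)

    y = einx.dot("b... c1, c1 c2 -> b... c2", x, make_weight, c2=32)
    # y.shape == (2, 8, 32)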
Tensor factories are particularly useful in the context of deep learning modules: The shapes of a layer's weights are typically chosen to align with the shapes
of the layer input and outputs (e.g. the number of input channels in a linear layer must match the corresponding axis in the layer's weight matrix).
This can be achieved implicitly by constructing layer weights using tensor factories.
The following tutorial describes in more detail how this is used in einx to implement deep learning models. python-einx-0.3.0/docs/source/gettingstarted/tutorial_overview.rst 0000664 0000000 0000000 00000003012 15052160342 0025571 0 ustar 00root root 0000000 0000000 Tutorial: Overview
##################
einx provides a universal interface to formulate tensor operations as concise expressions in frameworks such as
Numpy, PyTorch, Tensorflow and Jax. This tutorial will introduce the main concepts of Einstein-inspired notation
(or *einx notation*) and how it is used as a universal language for expressing tensor operations.
An einx expression is a string that represents the axis names of a tensor. For example, given the tensor
>>> import numpy as np
>>> x = np.ones((2, 3, 4))
we can name its dimensions ``a``, ``b`` and ``c``:
>>> import einx
>>> einx.matches("a b c", x) # Check whether expression matches the tensor's shape
True
>>> einx.matches("a b", x)
False
The purpose of einx expressions is to specify how tensor operations will be applied to the input tensors:
>>> np.sum(x, axis=1)
>>> # same as
>>> einx.sum("a [b] c", x)
Here, ``einx.sum`` represents the elementary *sum-reduction* operation that is computed. The expression ``a [b] c`` specifies
that it is applied to sub-tensors
spanning the ``b`` axis, and vectorized over axes ``a`` and ``c``. This is an example of the general paradigm
for formulating complex tensor operations with einx:
1. Provide a set of elementary tensor operations such as ``einx.{sum|max|where|add|dot|flip|get_at|...}``.
2. Use einx notation as a universal language to express vectorization of the elementary ops.
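As another brief illustration of this paradigm (a minimal sketch; the operations are covered in the following tutorials), the elementary *add* operation can be vectorized in the same way:

>>> y = np.ones((4,))
>>> einx.add("a b [c]", x, y).shape
(2, 3, 4)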
The following tutorials will give a deeper dive into einx expressions and how they are used to express a large variety of tensor operations. python-einx-0.3.0/docs/source/images/ 0000775 0000000 0000000 00000000000 15052160342 0017467 5 ustar 00root root 0000000 0000000 python-einx-0.3.0/docs/source/images/solver.drawio 0000775 0000000 0000000 00000067646 15052160342 0022236 0 ustar 00root root 0000000 0000000
python-einx-0.3.0/docs/source/images/stage1-tree.png (binary image data omitted)