electrum_aionostr-0.1.0/0000755000175000017500000000000007203224517013766 5ustar useruserelectrum_aionostr-0.1.0/LICENSE0000644000175000017500000000303707203224517014776 0ustar useruser BSD-3-Clause License Copyright (c) 2023, Dave St.Germain Copyright (c) 2024-2025 The Electrum developers All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
electrum_aionostr-0.1.0/PKG-INFO0000644000175000017500000000545607203224517015075 0ustar useruserMetadata-Version: 2.4 Name: electrum-aionostr Version: 0.1.0 Summary: asyncio nostr client Author: The Electrum developers License-Expression: BSD-3-Clause Project-URL: Homepage, https://github.com/spesmilo/electrum-aionostr Project-URL: Repository, https://github.com/spesmilo/electrum-aionostr Keywords: nostr,asyncio Classifier: Development Status :: 2 - Pre-Alpha Classifier: Intended Audience :: Developers Classifier: Natural Language :: English Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.12 Classifier: Programming Language :: Python :: 3.13 Requires-Python: >=3.10 Description-Content-Type: text/markdown License-File: LICENSE Requires-Dist: electrum_ecc Requires-Dist: aiohttp<4.0.0,>=3.11.0 Requires-Dist: aiohttp_socks>=0.9.2 Requires-Dist: aiorpcx<0.26,>=0.22.0 Provides-Extra: crypto Requires-Dist: cryptography>=2.8; extra == "crypto" Provides-Extra: tests Requires-Dist: pytest-cov; extra == "tests" Requires-Dist: Click>=8.2; extra == "tests" Provides-Extra: cli Requires-Dist: Click; extra == "cli" Dynamic: license-file # electrum-aionostr asyncio nostr client ``` Free software: BSD license Original Author: Dave St.Germain Fork Author/Maintainer: The Electrum Developers Language: Python (>= 3.10) ``` [![Latest PyPI package](https://badge.fury.io/py/electrum-aionostr.svg)](https://pypi.org/project/electrum-aionostr/) [![Build Status](https://api.cirrus-ci.com/github/spesmilo/electrum-aionostr.svg)](https://cirrus-ci.com/github/spesmilo/electrum-aionostr) This is a fork of [aionostr](https://github.com/davestgermain/aionostr) that does not require Coincurve. 
## Getting started ``` $ python3 -m pip install --user ".[crypto]" ``` ## Features * Retrieve anything from the nostr network, using one command: ``` $ aionostr get nprofile1qqsv0knzz56gtm8mrdjhjtreecl7dl8xa47caafkevfp67svwvhf9hcpz3mhxue69uhkgetnvd5x7mmvd9hxwtn4wvspak3h $ aionostr get -v nevent1qqsxpnzhw2ddf2uplsxgc5ctr9h6t65qaalzvzf0hvljwrz8q64637spp3mhxue69uhkyunz9e5k75j6gxm $ aionostr query -s -q '{"kinds": [1], "limit":10}' $ aionostr send --kind 1 --content test --private-key $ aionostr mirror -r wss://source.relay -t wss://target.relay --verbose '{"kinds": [4]}' ``` Set environment variables: ``` NOSTR_RELAYS=wss://brb.io,wss://nostr.mom NOSTR_KEY=`aionostr gen | head -1` ``` ### Maintainer notes Release checklist: - bump `__version__` in `__init__.py` - write changelog in [`docs/history.md`](docs/history.md) - `$ git tag -s $VERSION -m "$VERSION"` - `$ git push "$REMOTE_ORIGIN" tag "$VERSION"` - build sdist (see [`contrib/sdist/`](contrib/sdist)): - `$ ELECBUILD_COMMIT=HEAD ELECBUILD_NOCACHE=1 ./contrib/sdist/build.sh` - `$ python3 -m twine upload dist/$DISTNAME` electrum_aionostr-0.1.0/README.md0000644000175000017500000000320407203224517015244 0ustar useruser# electrum-aionostr asyncio nostr client ``` Free software: BSD license Original Author: Dave St.Germain Fork Author/Maintainer: The Electrum Developers Language: Python (>= 3.10) ``` [![Latest PyPI package](https://badge.fury.io/py/electrum-aionostr.svg)](https://pypi.org/project/electrum-aionostr/) [![Build Status](https://api.cirrus-ci.com/github/spesmilo/electrum-aionostr.svg)](https://cirrus-ci.com/github/spesmilo/electrum-aionostr) This is a fork of [aionostr](https://github.com/davestgermain/aionostr) that does not require Coincurve. 
## Getting started ``` $ python3 -m pip install --user ".[crypto]" ``` ## Features * Retrieve anything from the nostr network, using one command: ``` $ aionostr get nprofile1qqsv0knzz56gtm8mrdjhjtreecl7dl8xa47caafkevfp67svwvhf9hcpz3mhxue69uhkgetnvd5x7mmvd9hxwtn4wvspak3h $ aionostr get -v nevent1qqsxpnzhw2ddf2uplsxgc5ctr9h6t65qaalzvzf0hvljwrz8q64637spp3mhxue69uhkyunz9e5k75j6gxm $ aionostr query -s -q '{"kinds": [1], "limit":10}' $ aionostr send --kind 1 --content test --private-key $ aionostr mirror -r wss://source.relay -t wss://target.relay --verbose '{"kinds": [4]}' ``` Set environment variables: ``` NOSTR_RELAYS=wss://brb.io,wss://nostr.mom NOSTR_KEY=`aionostr gen | head -1` ``` ### Maintainer notes Release checklist: - bump `__version__` in `__init__.py` - write changelog in [`docs/history.md`](docs/history.md) - `$ git tag -s $VERSION -m "$VERSION"` - `$ git push "$REMOTE_ORIGIN" tag "$VERSION"` - build sdist (see [`contrib/sdist/`](contrib/sdist)): - `$ ELECBUILD_COMMIT=HEAD ELECBUILD_NOCACHE=1 ./contrib/sdist/build.sh` - `$ python3 -m twine upload dist/$DISTNAME` electrum_aionostr-0.1.0/pyproject.toml0000644000175000017500000000246307203224517016707 0ustar useruser[build-system] requires = ["setuptools >= 61.0.0"] build-backend = "setuptools.build_meta" [project] name = "electrum-aionostr" authors = [ { name = "The Electrum developers" }, ] description = "asyncio nostr client" keywords = ["nostr", "asyncio"] readme = "README.md" license = "BSD-3-Clause" license-files = ["LICENSE"] requires-python = ">=3.10" dependencies = [ "electrum_ecc", "aiohttp>=3.11.0,<4.0.0", "aiohttp_socks>=0.9.2", "aiorpcx>=0.22.0,<0.26", # for taskgroup. 
remove when we use python 3.11 ] classifiers = [ "Development Status :: 2 - Pre-Alpha", "Intended Audience :: Developers", "Natural Language :: English", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", ] dynamic = ['version'] [project.optional-dependencies] crypto = [ "cryptography>=2.8", ] tests = [ "pytest-cov", "Click>=8.2", ] cli = [ "Click", ] [project.urls] Homepage = "https://github.com/spesmilo/electrum-aionostr" Repository = "https://github.com/spesmilo/electrum-aionostr" [project.scripts] aionostr = "electrum_aionostr.cli:main" [tool.setuptools.dynamic] version = { attr = 'electrum_aionostr.__version__' } electrum_aionostr-0.1.0/setup.cfg0000644000175000017500000000004607203224517015607 0ustar useruser[egg_info] tag_build = tag_date = 0 electrum_aionostr-0.1.0/src/0000755000175000017500000000000007203224517014555 5ustar useruserelectrum_aionostr-0.1.0/src/electrum_aionostr/0000755000175000017500000000000007203224517020313 5ustar useruserelectrum_aionostr-0.1.0/src/electrum_aionostr/__init__.py0000644000175000017500000001164107203224517022427 0ustar useruser"""Top-level package for aionostr.""" __author__ = """The Electrum Developers""" __version__ = '0.1.0' import time from typing import Optional, List, Any from .relay import Manager, Relay async def get_anything(anything:str, relays=None, verbose=False, stream=False, origin='aionostr', private_key=None): """ Return anything from the nostr network anything: event id, nprofile, nevent, npub, nsec, or query To stream events, set stream=True. 
This will return an asyncio.Queue to retrieve events from """ from .util import from_nip19, NIP19_PREFIXES query = None single_event = False if isinstance(anything, list): if anything[0] == 'REQ': query = anything[2] else: raise NotImplementedError(anything) elif isinstance(anything, dict): query = anything elif anything.strip().startswith('{'): from json import loads query = loads(anything) elif anything.startswith(NIP19_PREFIXES): anything = anything.replace('nostr:', '', 1) obj = from_nip19(anything) if obj['type'] in ('npub', 'nsec'): return obj['object'].hex() else: relays = obj['relays'] or relays if obj['type'] == 'nprofile': query = {"kinds": [0], "authors": [obj['object']]} elif obj['type'] == 'nrelay': return obj['object'] elif obj['type'] == 'naddr': query = {} if obj['object']: query['#d'] = [obj['object']], if 'kind' in obj: query['kinds'] = [obj['kind']] if 'author' in obj: query['authors'] = [obj['author']] elif obj['object']: query = {"ids": [obj['object']]} single_event = True else: raise NotImplementedError(obj[0]) else: query = {"ids": [anything]} single_event = True if verbose: import sys sys.stderr.write(f"Retrieving {query} from {relays}\n") if query: if not relays: raise NotImplementedError("No relays to use") man = Manager(relays, origin=origin, private_key=private_key) if not stream: async with man: return [event async for event in man.get_events(query, single_event=single_event, only_stored=True)] else: import asyncio queue = asyncio.Queue() async def _task(): async with man: async for event in man.get_events(query, single_event=single_event, only_stored=False): await queue.put(event) asyncio.create_task(_task()) return queue async def _add_event(manager, event:dict=None, private_key='', kind=1, pubkey='', content='', created_at=None, tags=None, direct_message=''): """ Add an event to the network, using the given relays event can be specified (as a dict) or will be created from the passed in parameters """ if not event: from .key import 
PrivateKey from .event import Event from .util import from_nip19 created_at = created_at or int(time.time()) tags = tags or [] if not private_key: raise Exception("Missing private key") if private_key.startswith('nsec'): private_key = from_nip19(private_key)['object'].hex() prikey = PrivateKey(bytes.fromhex(private_key)) if not pubkey: pubkey = prikey.public_key.hex() if direct_message: dm_pubkey = from_nip19(direct_message)['object'].hex() if direct_message.startswith('npub') else direct_message tags.append(['p', dm_pubkey]) kind = 4 content = prikey.encrypt_message(content, dm_pubkey) event = Event(pubkey=pubkey, content=content, created_at=created_at, tags=tags, kind=kind) event = event.sign(prikey.hex()) event_id = event.id else: event_id = event['id'] result = await manager.add_event(event) return event_id async def add_event( relays, event: Optional[dict] = None, private_key: Optional[str] = '', kind: Optional[int] = 1, pubkey: Optional[str] = '', content: Optional[str] = '', created_at: Optional[int] = None, tags: Optional[List[List[Any]]] = None, direct_message: Optional[str] = '') -> str: async with Manager(relays, private_key=private_key) as man: return await _add_event( man, event=event, private_key=private_key, kind=kind, pubkey=pubkey, content=content, created_at=created_at, tags=tags, direct_message=direct_message) async def add_events(relays, event_iterator): async with Manager(relays) as man: for event in event_iterator: await man.add_event(event) electrum_aionostr-0.1.0/src/electrum_aionostr/bech32.py0000644000175000017500000001164207203224517021737 0ustar useruser# Copyright (c) 2017, 2020 Pieter Wuille # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to 
permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. """Reference implementation for Bech32/Bech32m and segwit addresses.""" from enum import Enum class Encoding(Enum): """Enumeration type to list the various supported encodings.""" BECH32 = 1 BECH32M = 2 CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l" BECH32M_CONST = 0x2bc830a3 def bech32_polymod(values): """Internal function that computes the Bech32 checksum.""" generator = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3] chk = 1 for value in values: top = chk >> 25 chk = (chk & 0x1ffffff) << 5 ^ value for i in range(5): chk ^= generator[i] if ((top >> i) & 1) else 0 return chk def bech32_hrp_expand(hrp): """Expand the HRP into values for checksum computation.""" return [ord(x) >> 5 for x in hrp] + [0] + [ord(x) & 31 for x in hrp] def bech32_verify_checksum(hrp, data): """Verify a checksum given HRP and converted data characters.""" const = bech32_polymod(bech32_hrp_expand(hrp) + data) if const == 1: return Encoding.BECH32 if const == BECH32M_CONST: return Encoding.BECH32M return None def bech32_create_checksum(hrp, data, spec): """Compute the checksum values given HRP and data.""" values = bech32_hrp_expand(hrp) + data const = BECH32M_CONST if spec == Encoding.BECH32M else 1 polymod = bech32_polymod(values + [0, 0, 0, 0, 0, 0]) ^ const return [(polymod >> 5 * 
(5 - i)) & 31 for i in range(6)] def bech32_encode(hrp, data, spec): """Compute a Bech32 string given HRP and data values.""" combined = data + bech32_create_checksum(hrp, data, spec) return hrp + '1' + ''.join([CHARSET[d] for d in combined]) def bech32_decode(bech): """Validate a Bech32/Bech32m string, and determine HRP and data.""" if ((any(ord(x) < 33 or ord(x) > 126 for x in bech)) or (bech.lower() != bech and bech.upper() != bech)): return (None, None, None) bech = bech.lower() pos = bech.rfind('1') # if pos < 1 or pos + 7 > len(bech) or len(bech) > 90: # return (None, None, None) if not all(x in CHARSET for x in bech[pos+1:]): return (None, None, None) hrp = bech[:pos] data = [CHARSET.find(x) for x in bech[pos+1:]] spec = bech32_verify_checksum(hrp, data) if spec is None: return (None, None, None) return (hrp, data[:-6], spec) def convertbits(data, frombits, tobits, pad=True): """General power-of-2 base conversion.""" acc = 0 bits = 0 ret = [] maxv = (1 << tobits) - 1 max_acc = (1 << (frombits + tobits - 1)) - 1 for value in data: if value < 0 or (value >> frombits): return None acc = ((acc << frombits) | value) & max_acc bits += frombits while bits >= tobits: bits -= tobits ret.append((acc >> bits) & maxv) if pad: if bits: ret.append((acc << (tobits - bits)) & maxv) elif bits >= frombits or ((acc << (tobits - bits)) & maxv): return None return ret def decode(hrp, addr): """Decode a segwit address.""" hrpgot, data, spec = bech32_decode(addr) if hrpgot != hrp: return (None, None) decoded = convertbits(data[1:], 5, 8, False) if decoded is None or len(decoded) < 2 or len(decoded) > 40: return (None, None) if data[0] > 16: return (None, None) if data[0] == 0 and len(decoded) != 20 and len(decoded) != 32: return (None, None) if data[0] == 0 and spec != Encoding.BECH32 or data[0] != 0 and spec != Encoding.BECH32M: return (None, None) return (data[0], decoded) def encode(hrp, witver, witprog): """Encode a segwit address.""" spec = Encoding.BECH32 if witver == 0 else 
Encoding.BECH32M ret = bech32_encode(hrp, [witver] + convertbits(witprog, 8, 5), spec) if decode(hrp, ret) == (None, None): return None return ret electrum_aionostr-0.1.0/src/electrum_aionostr/benchmark.py0000644000175000017500000001134607203224517022624 0ustar useruser""" Run these benchmarks like so: aionostr bench -r ws://localhost:6969 -f events_per_second -c 2 Available benchmarks: events_per_second: measures reading events req_per_second: measures total time from subscribing, reading, to unsubscribing adds_per_second: measures time to add events """ import asyncio import secrets import traceback from time import perf_counter, time from aiohttp import ClientSession import logging from .event import Event from .key import PrivateKey from .relay import Relay, loads, dumps class catchtime: __slots__ = ("start", "count", "duration") def __enter__(self): self.start = perf_counter() self.count = 0 return self def __exit__(self, type, value, traceback): self.duration = perf_counter() - self.start def __add__(self, value): self.count += value return self def throughput(self): return self.count / self.duration def make_events(num_events): private_key = PrivateKey() pubkey = private_key.public_key.hex() prikey = private_key.hex() events = [] expiration = str(int(time()) + 1200) tags = [["t", "benchmark"], ["expiration", expiration]] for i in range(num_events): e = Event(kind=9999, content=secrets.token_hex(6), pubkey=pubkey, tags=tags) e = e.sign(prikey) events.append(e) return events async def adds_per_second(url, num_events=100): events = make_events(num_events) async with ClientSession() as client: relay = Relay(url, client=client) async with relay: with catchtime() as timer: for e in events: await relay.add_event(e, check_response=True) timer += 1 print(f"\tAdd: took {timer.duration:.2f} seconds. 
{timer.throughput():.1f}/sec") return timer.throughput() async def events_per_second(url, kind=9999, limit=500, duration=20, id=None, **kwargs): query = {"kinds": [kind], "limit": limit} async with ClientSession() as client: async with client.ws_connect(url) as ws: print(f"connected {id}") timer, total_bytes = await asyncio.wait_for(_make_requests(ws, query, limit, duration), timeout=duration+1) bps = (total_bytes / timer.duration) / 1024 print( f"\tEvents: took {timer.duration:.2f} seconds. {timer.throughput():.1f}/sec {bps:.1f}kBps" ) return timer.count, total_bytes async def _make_requests(ws, query, limit, duration): total_bytes = 0 stoptime = perf_counter() + duration query_str = dumps(["REQ", "bench", query]) query_close = dumps(["CLOSE", "bench"]) send = ws.send_str recv = ws.receive_str with catchtime() as timer: while perf_counter() < stoptime: try: await send(query_str) count = 0 while True: event = await recv() total_bytes += len(event) if not event.startswith('["EVENT"'): if count != limit: raise Exception(f"Did not receive full req: {count} {limit}") break count += 1 timer += 1 await send(query_close) except asyncio.exceptions.CancelledError: break except asyncio.exceptions.TimeoutError: print(f"ERROR: task timed out while reading") break except Exception as e: traceback.print_exc() break return timer, total_bytes async def req_per_second(url, kind=9999, limit=50, duration=20, id=0): query = {"kinds": [kind], "limit": limit} client = ClientSession() async with client.ws_connect(url) as ws: print(f"connected {id}") timer, total_bytes = await asyncio.wait_for(_make_requests(ws, query, limit, duration), timeout=duration+1) bps = (total_bytes / timer.duration) / 1024 print( f"\tReq: {timer.count} iterations. 
{timer.throughput():.1f}/sec {bps:.1f}kBps" ) return timer.count, total_bytes async def runner(concurrency, func, *args, **kwargs): tasks = [] start = perf_counter() for i in range(concurrency): kwargs["id"] = i tasks.append(asyncio.create_task(func(*args, **kwargs))) results = await asyncio.wait(tasks) duration = perf_counter() - start total_count = 0 total_bytes = 0 for r in results[0]: count, received = r.result() total_count += count total_bytes += received total_bps = (total_bytes / duration) / (1024 * 1024) total_throughput = total_count / duration print(f"Total throughput: {total_throughput:.1f}/sec") print(f"Total MBps: {total_bps:.1f}MBps") electrum_aionostr-0.1.0/src/electrum_aionostr/cli.py0000644000175000017500000002124607203224517021441 0ustar useruser"""Console script for aionostr.""" import sys import asyncio import time import datetime import os import logging from functools import wraps import click from . import get_anything, add_event try: import uvloop uvloop.install() except ImportError: pass DEFAULT_RELAYS = os.getenv('NOSTR_RELAYS', 'wss://nos.lol,wss://nostr.mom').split(',') def async_cmd(func): @wraps(func) def wrapper(*args, **kwargs): return asyncio.run(func(*args, **kwargs)) return wrapper @click.group() def main(args=None): """Console script for aionostr.""" # click.echo("Replace this message by putting your code into " # "aionostr.cli.main") # click.echo("See click documentation at https://click.palletsprojects.com/") return 0 @main.command() @click.option('-r', 'relays', help='relay url', multiple=True, default=DEFAULT_RELAYS) @click.option('-s', '--stream', help='stream results', is_flag=True, default=False) @click.option('-v', '--verbose', help='verbose results', is_flag=True, default=False) @click.option('-p', '--pretty', help='pretty print results', is_flag=True, default=False) @click.option('-c', '--content', help='only show content of events', is_flag=True, default=False) @click.option('-q', '--query', help='query json') 
@click.option('--ids', help='ids') @click.option('--authors', help='authors') @click.option('--kinds', help='kinds') @click.option('--etags', help='etags') @click.option('--ptags', help='ptags') @click.option('--since', help='since') @click.option('--until', help='until') @click.option('--limit', help='limit') @async_cmd async def query(ids, authors, kinds, etags, ptags, since, until, limit, query, relays, stream, verbose, pretty, content): """ Run a query once and print events """ import json if not sys.stdin.isatty(): query = json.loads(sys.stdin.readline()) elif query: query = json.loads(query) else: query = {} if ids: query['ids'] = ids.split(',') if authors: query['authors'] = authors.split(',') if kinds: query['kinds'] = [int(k) for k in kinds.split(',')] if etags: query['#e'] = etags.split(',') if ptags: query['#p'] = ptags.split(',') if since: query['since'] = int(since) if until: query['until'] = int(until) if limit: query['limit'] = int(limit) if not query: click.echo("some type of query is required") return -1 await _get(query, relays, verbose=verbose, stream=stream, pretty=pretty, content=content) async def _get(anything, relays, verbose=False, stream=False, content=False, pretty=False): import json logging.basicConfig(format='%(asctime)s %(name)s %(levelname)s %(message)s', level=logging.DEBUG if verbose else logging.WARNING) response = await get_anything(anything, relays, stream=stream, private_key=os.getenv("NOSTR_KEY")) if isinstance(response, str): click.echo(response) return elif isinstance(response, asyncio.Queue): async def iterator(): while True: event = await response.get() yield event else: async def iterator(): for event in response: yield event async for event in iterator(): if content: click.echo(event.content) elif pretty: click.echo( click.style( json.dumps(event.to_json_object(), indent=4), fg="red" ) ) else: click.echo(event) @main.command() @click.argument("anything") @click.option('-r', 'relays', help='relay url', multiple=True, 
default=DEFAULT_RELAYS) @click.option('-c', '--content', help='only show content of events', is_flag=True, default=False) @click.option('-p', '--pretty', help='pretty print results', is_flag=True, default=False) @click.option('-v', '--verbose', help='verbose results', is_flag=True, default=False) @async_cmd async def get(anything, relays, verbose, stream=False, content=False, pretty=False): """ Get any nostr event """ await _get(anything, relays, verbose=verbose, stream=stream, content=content, pretty=pretty) @main.command() @click.option('-r', 'relays', help='relay url', multiple=True, default=DEFAULT_RELAYS) @click.option('-v', '--verbose', help='verbose results', is_flag=True, default=False) @click.option('--content', default='', help='content') @click.option('--kind', default=20000, help='kind', type=int) @click.option('--created', default=int(time.time()), type=int, help='created_at') @click.option('--pubkey', default='', help='public key') @click.option('--tags', default='[]', help='tags') @click.option('--private-key', default='', help='private key') @click.option('--dm', default='', help='pubkey to send dm') @async_cmd async def send(content, kind, created, tags, pubkey, relays, private_key, dm, verbose): """ Send an event to the network private key can be set using environment variable NOSTR_KEY """ import json from .util import to_nip19 tags = json.loads(tags) private_key = private_key or os.getenv('NOSTR_KEY', '') if not sys.stdin.isatty(): event = json.loads(sys.stdin.readline()) else: event = None logging.basicConfig(format='%(asctime)s %(name)s %(levelname)s %(message)s', level=logging.DEBUG if verbose else logging.WARNING) event_id = await add_event( relays, event=event, pubkey=pubkey, private_key=private_key, created_at=int(created), kind=kind, content=content, tags=tags, direct_message=dm, ) click.echo(event_id) click.echo(to_nip19('nevent', event_id, relays)) @main.command() @click.argument("anything") @click.option('-r', 'relays', help='relay 
url', multiple=True, default=DEFAULT_RELAYS) @click.option('-v', '--verbose', help='verbose results', is_flag=True, default=False) @click.option('-t', '--target', help='target relay', required=True) @click.option('--since', help='since', type=int) @async_cmd async def mirror(anything, relays, target, verbose, since): """ Mirror a query from source relays to the target relay """ logging.basicConfig(format='%(asctime)s %(name)s %(levelname)s %(message)s', level=logging.DEBUG if verbose else logging.WARNING) if since: import json anything = json.loads(anything) anything['since'] = since if verbose: click.echo(f'mirroring: {anything} from {relays} to {target}') from . import Manager private_key = os.getenv("NOSTR_KEY") async with Manager([target], private_key=private_key) as man: result_queue = await get_anything(anything, relays=relays, stream=True, private_key=private_key) count = 0 while True: event = await result_queue.get() await man.add_event(event, check_response=True) count += 1 if verbose: click.echo(f'{event.id} from {event.pubkey}') else: if count % 100 == 0: click.echo(f'{count}...{str(datetime.datetime.utcfromtimestamp(event.created_at))}') click.echo(f'{count} sent') @main.command() @click.argument("ntype") @click.argument("obj_id") @click.option('-r', 'relays', help='relay url', multiple=True, default=DEFAULT_RELAYS) def make_nip19(ntype, obj_id, relays): """ Create nip-19 string for given object id """ from .util import to_nip19 obj = to_nip19(ntype, obj_id, relays=relays) click.echo(obj) @main.command() def gen(): """ Generate a private/public key pair """ from .key import PrivateKey from .util import to_nip19 pk = PrivateKey() click.echo(to_nip19('nsec', pk.hex())) click.echo(to_nip19('npub', pk.public_key.hex())) @main.command() @click.option('-r', 'relay', help='relay url', default='ws://127.0.0.1:6969') @click.option('-f', 'function', help='function to run', default="events_per_second") @click.option('-c', 'concurrency', help='concurrency', 
default=2)
@click.option('-s', '--setup/--no-setup', help='add events to setup', is_flag=True, default=True)
@async_cmd
async def bench(relay, function, concurrency, setup, num_events=1000):
    from electrum_aionostr import benchmark
    func = getattr(benchmark, function)
    args = [relay]
    if setup:
        click.echo(f"Adding {num_events} events to setup")
        await benchmark.adds_per_second(relay, num_events)
        await asyncio.sleep(1.0)
    click.echo(f"Running benchmark {function} with concurrency {concurrency}")
    await benchmark.runner(concurrency, func, *args)


if __name__ == "__main__":
    sys.exit(main())  # pragma: no cover

electrum_aionostr-0.1.0/src/electrum_aionostr/crypto_aes.py

# This file is extracted and stripped down
# from https://github.com/spesmilo/electrum/blob/d17bb016ef54b6c0a99957cf3d093d0923ac6347/electrum/crypto.py
# TODO fix code duplication between repos


def versiontuple(v):
    return tuple(map(int, (v.split("."))))


def assert_bytes(*args):
    for x in args:
        assert isinstance(x, (bytes, bytearray))


HAS_CRYPTODOME = False
MIN_CRYPTODOME_VERSION = "3.7"
try:
    import Cryptodome
    if versiontuple(Cryptodome.__version__) < versiontuple(MIN_CRYPTODOME_VERSION):
        #_logger.warning(f"found module 'Cryptodome' but it is too old: {Cryptodome.__version__}<{MIN_CRYPTODOME_VERSION}")
        raise Exception()
    from Cryptodome.Cipher import ChaCha20_Poly1305 as CD_ChaCha20_Poly1305
    from Cryptodome.Cipher import ChaCha20 as CD_ChaCha20
    from Cryptodome.Cipher import AES as CD_AES
except Exception:
    #_logger.error("missing Cryptodome", exc_info=True)
    pass
else:
    HAS_CRYPTODOME = True

HAS_CRYPTOGRAPHY = False
MIN_CRYPTOGRAPHY_VERSION = "2.1"
try:
    import cryptography
    if versiontuple(cryptography.__version__) < versiontuple(MIN_CRYPTOGRAPHY_VERSION):
        #_logger.warning(f"found module 'cryptography' but it is too old: {cryptography.__version__}<{MIN_CRYPTOGRAPHY_VERSION}")
        raise Exception()
    from cryptography import exceptions
    from cryptography.hazmat.primitives.ciphers import Cipher as CG_Cipher
    from cryptography.hazmat.primitives.ciphers import algorithms as CG_algorithms
    from cryptography.hazmat.primitives.ciphers import modes as CG_modes
    from cryptography.hazmat.backends import default_backend as CG_default_backend
    import cryptography.hazmat.primitives.ciphers.aead as CG_aead
except Exception:
    #_logger.error("missing cryptography", exc_info=True)
    pass
else:
    HAS_CRYPTOGRAPHY = True

if not (HAS_CRYPTODOME or HAS_CRYPTOGRAPHY):
    raise ImportError("Error: at least one of ('pycryptodomex', 'cryptography') needs to be installed.")


class InvalidPadding(Exception):
    pass


def append_PKCS7_padding(data: bytes) -> bytes:
    assert_bytes(data)
    padlen = 16 - (len(data) % 16)
    return data + bytes([padlen]) * padlen


def strip_PKCS7_padding(data: bytes) -> bytes:
    assert_bytes(data)
    if len(data) % 16 != 0 or len(data) == 0:
        raise InvalidPadding("invalid length")
    padlen = data[-1]
    if not (0 < padlen <= 16):
        raise InvalidPadding("invalid padding byte (out of range)")
    for i in data[-padlen:]:
        if i != padlen:
            raise InvalidPadding("invalid padding byte (inconsistent)")
    return data[0:-padlen]


def aes_encrypt_with_iv(key: bytes, iv: bytes, data: bytes) -> bytes:
    assert_bytes(key, iv, data)
    data = append_PKCS7_padding(data)
    if HAS_CRYPTODOME:
        e = CD_AES.new(key, CD_AES.MODE_CBC, iv).encrypt(data)
    elif HAS_CRYPTOGRAPHY:
        cipher = CG_Cipher(CG_algorithms.AES(key), CG_modes.CBC(iv), backend=CG_default_backend())
        encryptor = cipher.encryptor()
        e = encryptor.update(data) + encryptor.finalize()
    else:
        raise Exception("no AES backend found")
    return e


def aes_decrypt_with_iv(key: bytes, iv: bytes, data: bytes) -> bytes:
    assert_bytes(key, iv, data)
    if HAS_CRYPTODOME:
        cipher = CD_AES.new(key, CD_AES.MODE_CBC, iv)
        data = cipher.decrypt(data)
    elif HAS_CRYPTOGRAPHY:
        cipher = CG_Cipher(CG_algorithms.AES(key), CG_modes.CBC(iv), backend=CG_default_backend())
        decryptor = cipher.decryptor()
        data = decryptor.update(data) + decryptor.finalize()
    else:
        raise Exception("no AES backend found")
    try:
        return strip_PKCS7_padding(data)
    except InvalidPadding:
        raise

electrum_aionostr-0.1.0/src/electrum_aionostr/delegation.py

"""
forked from https://github.com/jeffthibault/python-nostr.git
"""
import time
from dataclasses import dataclass
from typing import List


@dataclass
class Delegation:
    delegator_pubkey: str
    delegatee_pubkey: str
    event_kind: int
    duration_secs: int = 30*24*60*60  # default to 30 days
    signature: str = None  # set in PrivateKey.sign_delegation

    @property
    def expires(self) -> int:
        return int(time.time()) + self.duration_secs

    @property
    def conditions(self) -> str:
        return f"kind={self.event_kind}&created_at<{self.expires}"

    @property
    def delegation_token(self) -> str:
        return f"nostr:delegation:{self.delegatee_pubkey}:{self.conditions}"

    def get_tag(self) -> List[str]:
        """ Called by Event """
        return [
            "delegation",
            self.delegator_pubkey,
            self.conditions,
            self.signature,
        ]

electrum_aionostr-0.1.0/src/electrum_aionostr/event.py

"""
forked from https://github.com/jeffthibault/python-nostr.git
"""
import copy
import dataclasses
import time
import functools
from enum import IntEnum
from hashlib import sha256
from typing import Optional

import electrum_ecc as ecc
from electrum_ecc import ECPrivkey, ECPubkey

try:
    import rapidjson
    loads = rapidjson.loads
    dumps = functools.partial(rapidjson.dumps, ensure_ascii=False)
except ImportError:
    import json
    loads = json.loads
    dumps = functools.partial(json.dumps, separators=(",", ":"), ensure_ascii=False)


class EventKind(IntEnum):
    SET_METADATA = 0
    TEXT_NOTE = 1
    RECOMMEND_RELAY = 2
    CONTACTS = 3
    ENCRYPTED_DIRECT_MESSAGE = 4
    DELETE = 5


class InvalidEvent(ValueError):
    pass


@dataclasses.dataclass(frozen=True, kw_only=True, slots=True)
class Event:
    id: Optional[str] = None
    pubkey: str
    content: str = ""
    created_at: int = 
dataclasses.field(default_factory=lambda: int(time.time()))
    kind: int = EventKind.TEXT_NOTE
    tags: list[list[str]] = dataclasses.field(default_factory=list)  # supposed to be immutable!
    sig: Optional[str] = None

    def __post_init__(self):
        if not isinstance(self.content, str):
            raise TypeError("'content' must be a str")
        if not (isinstance(self.pubkey, str) and len(self.pubkey) == 64):
            raise TypeError(f"got pubkey with unexpected type or len={len(self.pubkey)}, expected 64 char x-only hex")
        for inner_list in self.tags:
            if not all(isinstance(x, str) for x in inner_list):
                raise TypeError(f"tags must be list[list[str]]: {self.tags=!r}")
        if not isinstance(self.created_at, int):
            raise TypeError("'created_at' must be an int")
        if not isinstance(self.kind, int):
            raise TypeError("Argument 'kind' must be an int")
        if not (0 <= self.kind <= 65535):
            raise ValueError(f"event.kind out of range: {self.kind}")
        # id
        # note: we don't validate the original self.id, just always overwrite it
        computed_id = self.compute_id(
            pubkey=self.pubkey, created_at=self.created_at, kind=self.kind,
            tags=self.tags, content=self.content,
        )
        object.__setattr__(self, 'id', computed_id)
        # sigcheck.
        # We enforce sig is either None or a valid signature.
        if self.sig is not None:
            if not (isinstance(self.sig, str) and len(self.sig) == 128):
                raise TypeError(f"got sig with unexpected type or len={len(self.sig)}, expected 128 char hex")
            if not self.verify():
                raise InvalidEvent("invalid signature")

    @property
    def id_bytes(self):
        return bytes.fromhex(self.id)

    @property
    def is_ephemeral(self):
        return 20000 <= self.kind < 30000

    @property
    def is_replaceable(self):
        return (10000 <= self.kind < 20000) or self.kind in (0, 3,)

    @property
    def is_parameterized_replaceable(self):
        return 30000 <= self.kind < 40000

    @staticmethod
    def serialize(
        *,
        pubkey: str,
        created_at: int,
        kind: int,
        tags: "list[list[str]]",
        content: str,
    ) -> bytes:
        data = [0, pubkey, created_at, kind, tags, content]
        data_str = dumps(data)
        return data_str.encode()

    @staticmethod
    def compute_id(
        *,
        pubkey: str,
        created_at: int,
        kind: int,
        tags: "list[list[str]]",
        content: str,
    ) -> str:
        return sha256(
            Event.serialize(pubkey=pubkey, created_at=created_at, kind=kind, tags=tags, content=content)
        ).hexdigest()

    def expires_at(self) -> Optional[int]:
        for tag in self.tags:
            if len(tag) >= 2 and tag[0] == 'expiration':
                try:
                    return int(tag[1])
                except Exception:
                    continue
        return None

    def is_expired(self) -> bool:
        if (expiration_ts := self.expires_at()) is not None:
            return expiration_ts < time.time()
        return False

    def add_expiration_tag(self, expiration_ts: int) -> "Event":
        assert self.expires_at() is None, "Duplicate expiration tags"
        assert expiration_ts >= int(time.time()), f"Expiration is in the past: {expiration_ts=}"
        tags = copy.deepcopy(self.tags)
        tags.append(['expiration', str(expiration_ts)])
        return dataclasses.replace(self, tags=tags, sig=None)

    def sign(self, private_key_hex: str) -> "Event":
        sig = self._sign_event_id(private_key_hex=private_key_hex, event_id=self.id)
        return dataclasses.replace(self, sig=sig)

    @classmethod
    def _sign_event_id(cls, *, private_key_hex: str, event_id: str) -> str:
        sk = ECPrivkey(bytes.fromhex(private_key_hex))
        sig = sk.schnorr_sign(bytes.fromhex(event_id))
        return sig.hex()

    def verify(self) -> bool:
        if not self.sig:
            return False
        try:
            pub_key = ECPubkey(bytes.fromhex("02" + self.pubkey))
        except Exception:
            return False
        event_id = Event.compute_id(
            pubkey=self.pubkey, created_at=self.created_at, kind=self.kind,
            tags=self.tags, content=self.content,
        )
        assert self.id == event_id
        verified = pub_key.schnorr_verify(
            bytes.fromhex(self.sig),
            bytes.fromhex(event_id),
        )
        for tag in self.tags:
            if tag[0] == "delegation":
                # verify delegation signature
                _, delegator, conditions, sig = tag
                to_sign = (
                    ":".join(["nostr", "delegation", self.pubkey, conditions])
                ).encode("utf8")
                delegation_verified = ECPubkey(bytes.fromhex("02" + delegator)).schnorr_verify(
                    bytes.fromhex(sig),
                    sha256(to_sign).digest(),
                )
                if not delegation_verified:
                    return False
        return verified

    def has_tag(self, tag_name: str, matches: list = None) -> tuple[bool, str]:
        """
        Given a tag name and optional list of matches to find, return (found, match)
        """
        found_tag = False
        match = None
        for tag in self.tags:
            if tag[0] == tag_name:
                found_tag = True
                if matches and len(tag) > 1 and tag[1] in matches:
                    match = tag[1]
        return found_tag, match

    def to_message(self, sub_id: str = None):
        message = ["EVENT"]
        if sub_id:
            message.append(sub_id)
        message.append(self.to_json_object())
        return dumps(message)

    def __str__(self):
        return dumps(self.to_json_object())

    def to_json_object(self) -> dict:
        return {
            "id": self.id,
            "pubkey": self.pubkey,
            "created_at": self.created_at,
            "kind": self.kind,
            "tags": self.tags,
            "content": self.content,
            "sig": self.sig,
        }

    @classmethod
    def from_json(cls, d: dict, *, verify_sig: bool = True) -> "Event":
        sig = None
        if verify_sig:
            # we just check we were given a sig, the sigcheck itself is in Event.__init__
            sig = d.get("sig")
            if not sig:
                raise ValueError("missing sig")
        return Event(
            pubkey=d["pubkey"],
            created_at=d["created_at"],
            kind=d["kind"],
            tags=d["tags"],
            content=d["content"],
            sig=sig,
        )
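The `Event.serialize()`/`Event.compute_id()` pair above implements the NIP-01 event id: the sha256 of a compact JSON array. A self-contained, stdlib-only sketch of the same computation (all values below are dummies for illustration; the module prefers `rapidjson` when available, which produces the same compact encoding):

```python
import json
from hashlib import sha256

def compute_event_id(pubkey: str, created_at: int, kind: int,
                     tags: list, content: str) -> str:
    # NIP-01: the id is the sha256 of the canonical JSON array
    # [0, pubkey, created_at, kind, tags, content], with no extra whitespace.
    data = [0, pubkey, created_at, kind, tags, content]
    serialized = json.dumps(data, separators=(",", ":"), ensure_ascii=False)
    return sha256(serialized.encode()).hexdigest()

# dummy values, for illustration only
event_id = compute_event_id(pubkey="ab" * 32, created_at=1700000000,
                            kind=1, tags=[], content="test")
print(len(event_id))  # 64
```

Because the serialization is deterministic, the same inputs always hash to the same id; any change to `content`, `tags`, or `created_at` yields a different id, which is why `__post_init__` can simply recompute and overwrite `self.id`.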
electrum_aionostr-0.1.0/src/electrum_aionostr/key.py

"""
forked from https://github.com/jeffthibault/python-nostr.git
"""
import secrets
import base64
from hashlib import sha256

import electrum_ecc as ecc

from .crypto_aes import aes_encrypt_with_iv, aes_decrypt_with_iv
from .delegation import Delegation
from .event import Event
from . import bech32


class PublicKey:
    def __init__(self, raw_bytes: bytes) -> None:
        assert isinstance(raw_bytes, bytes), type(raw_bytes)
        assert len(raw_bytes) == 32, len(raw_bytes)
        self.raw_bytes = raw_bytes

    def bech32(self) -> str:
        converted_bits = bech32.convertbits(self.raw_bytes, 8, 5)
        return bech32.bech32_encode("npub", converted_bits, bech32.Encoding.BECH32)

    def hex(self) -> str:
        return self.raw_bytes.hex()

    def verify_signed_message_hash(self, hash: str, sig: str) -> bool:
        return ecc.ECPubkey(b'\x02' + self.raw_bytes).schnorr_verify(
            bytes.fromhex(sig), bytes.fromhex(hash)
        )

    @classmethod
    def from_npub(cls, npub: str):
        """Load a PublicKey from its bech32/npub form"""
        hrp, data, spec = bech32.bech32_decode(npub)
        raw_public_key = bech32.convertbits(data, 5, 8)[:-1]
        return cls(bytes(raw_public_key))


class PrivateKey:
    def __init__(self, raw_secret: bytes = None) -> None:
        if raw_secret is not None:
            self.raw_secret = raw_secret
        else:
            self.raw_secret = secrets.token_bytes(32)
        sk = ecc.ECPrivkey(self.raw_secret)
        self.public_key = PublicKey(sk.get_public_key_bytes()[1:])

    @classmethod
    def from_nsec(cls, nsec: str):
        """Load a PrivateKey from its bech32/nsec form"""
        hrp, data, spec = bech32.bech32_decode(nsec)
        raw_secret = bech32.convertbits(data, 5, 8)[:-1]
        return cls(bytes(raw_secret))

    def bech32(self) -> str:
        converted_bits = bech32.convertbits(self.raw_secret, 8, 5)
        return bech32.bech32_encode("nsec", converted_bits, bech32.Encoding.BECH32)

    def hex(self) -> str:
        return self.raw_secret.hex()

    def compute_shared_secret(self, public_key_hex: str) -> bytes:
        privkey = ecc.ECPrivkey(self.raw_secret)
        pubkey = ecc.ECPubkey(bytes.fromhex("02" + public_key_hex))
        pt = pubkey * privkey.secret_scalar
        return int.to_bytes(pt.x(), length=32, byteorder='big', signed=False)

    def encrypt_message(self, message: str, public_key_hex: str) -> str:
        iv = secrets.token_bytes(16)
        encrypted_message = aes_encrypt_with_iv(
            key=self.compute_shared_secret(public_key_hex),
            iv=iv,
            data=message.encode(),
        )
        return f"{base64.b64encode(encrypted_message).decode()}?iv={base64.b64encode(iv).decode()}"

    def decrypt_message(self, encoded_message: str, public_key_hex: str) -> str:
        encoded_data = encoded_message.split("?iv=")
        encoded_content, encoded_iv = encoded_data[0], encoded_data[1]
        iv = base64.b64decode(encoded_iv)
        encrypted_content = base64.b64decode(encoded_content)
        decrypted_message = aes_decrypt_with_iv(
            key=self.compute_shared_secret(public_key_hex),
            iv=iv,
            data=encrypted_content,
        )
        return decrypted_message.decode()

    def sign_message_hash(self, hash: bytes) -> str:
        sk = ecc.ECPrivkey(self.raw_secret)
        sig = sk.schnorr_sign(hash)
        return sig.hex()

    def sign_event(self, event: Event) -> Event:
        # Event is a frozen dataclass, so we cannot assign event.sig in place;
        # return a signed copy via Event.sign() instead.
        return event.sign(self.hex())

    def sign_delegation(self, delegation: Delegation) -> None:
        delegation.signature = self.sign_message_hash(
            sha256(delegation.delegation_token.encode()).digest()
        )

    def __eq__(self, other):
        return self.raw_secret == other.raw_secret


def mine_vanity_key(prefix: str = None, suffix: str = None) -> PrivateKey:
    if prefix is None and suffix is None:
        raise ValueError("Expected at least one of 'prefix' or 'suffix' arguments")
    while True:
        sk = PrivateKey()
        if (
            prefix is not None
            and not sk.public_key.bech32()[5 : 5 + len(prefix)] == prefix
        ):
            continue
        if suffix is not None and not sk.public_key.bech32()[-len(suffix) :] == suffix:
            continue
        break
    return sk

electrum_aionostr-0.1.0/src/electrum_aionostr/relay.py

import asyncio
import secrets
import logging
import json
from collections import defaultdict, namedtuple
from typing import Optional, Iterable, Dict, List, Set, Any, TYPE_CHECKING, AsyncGenerator
from dataclasses import dataclass
import time

from aiohttp import ClientSession, client_exceptions
import aiorpcx

from .event import Event
from .util import normalize_url

if TYPE_CHECKING:
    from logging import Logger
    from ssl import SSLContext
    from aiohttp_socks import ProxyConnector
    from aiohttp import ClientWebSocketResponse


# Subscription used inside Relay
Subscription = namedtuple('Subscription', ['filters', 'queue'])


# Subscription used inside Manager
@dataclass
class ManagerSubscription:
    output_queue: asyncio.Queue  # queue collects all events from all relays
    filters: tuple[Any, ...]     # filters used to subscribe
    seen_events: Set[bytes]      # event ids we have seen
    monitor: asyncio.Task        # monitoring task
    only_stored: bool


class Relay:
    """
    Interact with a relay
    """
    DELAY_INC_MSG_PROCESSING_SLEEP = 0.005  # in seconds

    def __init__(self, url: str, origin: str = '', private_key: str = '',
                 connect_timeout: float = 1.0, log=None, ssl_context=None,
                 proxy: Optional['ProxyConnector'] = None):
        self.log = log or logging.getLogger(__name__)
        self.url = normalize_url(url)
        self.proxy = proxy
        self.client = None  # type: Optional[ClientSession]
        self.ws = None  # type: Optional[ClientWebSocketResponse]
        self.receive_task = None  # type: Optional[asyncio.Task]
        self.subscriptions = {}  # type: Dict[str, Subscription]
        self.event_adds = {}  # type: dict[str, asyncio.Future[list]]
        self.notices = asyncio.Queue(maxsize=100)
        self.private_key = private_key
        self.origin = origin or url
        self.connected = False
        self.connect_timeout = connect_timeout
        self.ssl_context = ssl_context

    async def connect(self, taskgroup=None, retries=2):
        if not self.client:
            connector_owner = False if self.proxy is not None else True
            self.client = ClientSession(connector=self.proxy, connector_owner=connector_owner)
        for i in range(retries):
            try:
                self.ws = await asyncio.wait_for(
                    self.client.ws_connect(
                        url=self.url,
                        origin=self.origin,
                        ssl=self.ssl_context
                    ),
                    self.connect_timeout)
            except Exception as e:
                self.log.debug(f"Exception on connect: {e!r}")
                if self.ws:
                    await self.ws.close()
                await asyncio.sleep(i ** 2)
            except asyncio.CancelledError:
                # the Manager might cancel the connection attempt if it takes too long, we still
                # need to clean up the client
                await self.client.close()
                self.client = None
                raise
            else:
                break
        else:
            self.log.info(f"Cannot connect to {self.url}")
            await self.client.close()
            self.client = None
            return False
        if self.receive_task is None and taskgroup:
            self.receive_task = await taskgroup.spawn(self._receive_messages())
        elif self.receive_task is None:
            self.receive_task = asyncio.create_task(self._receive_messages())
        await asyncio.sleep(0.01)
        self.connected = True
        self.log.info("Connected to %s", self.url)
        return True

    async def reconnect(self):
        while not await self.connect(taskgroup=None, retries=20):
            await asyncio.sleep(60*30)
        for sub_id, sub in self.subscriptions.items():
            self.log.debug("resubscribing to %s", sub.filters)
            await self.send(["REQ", sub_id, *sub.filters])

    async def close(self, taskgroup=None):
        if self.receive_task:
            self.receive_task.cancel()  # fixme: this will cancel taskgroup
        if self.ws:
            if taskgroup:
                await taskgroup.spawn(self.ws.close())
            else:
                await self.ws.close()
        if self.client:
            if taskgroup:
                await taskgroup.spawn(self.client.close())
            else:
                await self.client.close()
        self.connected = False

    async def _receive_messages(self):
        while True:
            # sleep a bit between each message, to mitigate CPU-DOS (verifying signatures is expensive):
            await asyncio.sleep(self.DELAY_INC_MSG_PROCESSING_SLEEP)
            try:
                message = await self.ws.receive_str()
                if len(message) > 64000:
                    self.log.debug(f"got too long message from {self.url=}: {len(message)=}")
                    continue  # not storing or handling msg > this limit
                message = json.loads(message)
                self.log.debug(message)  # FIXME spammy (or at least log which relay it's coming from)
                if message[0] == 'EVENT':
                    sub_id = message[1]
                    sub = self.subscriptions[sub_id]  # can raise KeyError for unknown sub_id
                    # note: - Event.from_json will do basic validation, and sigcheck.
                    #       - The sigcheck is expensive -- we could perhaps pre-calc the event_id,
                    #         store a per-relay per-sub "seen" event_id set, and discard duplicates.
                    #         To make it harder for malicious relay to CPU-DOS us.
                    event = Event.from_json(message[2])
                    # TODO validate if event is actually related to sub? by matching sub.filters
                    await sub.queue.put(event)
                elif message[0] == 'EOSE':
                    sub_id = message[1]
                    sub = self.subscriptions[sub_id]  # can raise KeyError for unknown sub_id
                    await sub.queue.put(None)
                elif message[0] == 'OK':
                    if message[1] in self.event_adds:
                        self.event_adds[message[1]].set_result(message)
                elif message[0] == 'NOTICE':
                    if self.notices.full():
                        self.notices.get_nowait()  # remove the oldest notice to store new one
                    self.notices.put_nowait(message[1])
                elif message[0] == 'AUTH':
                    await self.authenticate(message[1])
                else:
                    self.log.debug(f"Unknown message from relay {self.url}: {str(message)}")
            except (IndexError, KeyError):
                await asyncio.sleep(0.1)
                continue
            except asyncio.CancelledError:
                return
            except client_exceptions.WSMessageTypeError:
                # raised by ws.receive_str when connection is closed
                await self.reconnect()
            except Exception:
                self.log.exception("")
                await asyncio.sleep(5)

    async def send(self, message):
        try:
            await self.ws.send_str(json.dumps(message))
        except client_exceptions.ClientConnectionError:
            await self.reconnect()
            await self.ws.send_str(json.dumps(message))

    async def add_event(self, event, check_response=False):
        if isinstance(event, Event):
            event = event.to_json_object()
        event_id = event['id']
        if check_response:
            self.event_adds[event_id] = asyncio.Future()
        await self.send(["EVENT", event])
        if check_response:
            try:
                response = await self.event_adds[event_id]
            finally:
                del self.event_adds[event_id]
            return response[1]
        return None

    async def subscribe(self, taskgroup, sub_id: str, *filters, queue=None):
        self.subscriptions[sub_id] = Subscription(filters=filters, queue=queue or asyncio.Queue(maxsize=50))
        await taskgroup.spawn(self.send(["REQ", sub_id, *filters]))
        return self.subscriptions[sub_id].queue

    async def unsubscribe(self, sub_id: str) -> None:
        await self.send(["CLOSE", sub_id])
        self.subscriptions.pop(sub_id, None)

    async def authenticate(self, challenge: str):
        if not self.private_key:
            import warnings
            warnings.warn("private key required to authenticate")
            return
        from .key import PrivateKey
        if self.private_key.startswith('nsec'):
            from .util import from_nip19
            pk = from_nip19(self.private_key)['object']
        else:
            pk = PrivateKey(bytes.fromhex(self.private_key))
        auth_event = Event(
            kind=22242,
            pubkey=pk.public_key.hex(),
            tags=[
                ['challenge', challenge],
                ['relay', self.url]
            ]
        )
        auth_event = auth_event.sign(pk.hex())
        await self.send(["AUTH", auth_event.to_json_object()])
        await asyncio.sleep(0.1)
        return True

    async def __aenter__(self):
        await self.connect()
        return self

    async def __aexit__(self, ex_type, ex, tb):
        await self.close()


class Manager:
    """
    Manage a collection of relays
    """
    # time after which we assume a relay won't send us any more messages for a requested filter
    EOSE_TIMEOUT_SEC = 60

    def __init__(self, relays: Optional[Iterable[str]] = None, origin: Optional[str] = 'aionostr',
                 private_key: Optional[str] = None, log: Optional['Logger'] = None,
                 ssl_context: Optional['SSLContext'] = None, proxy: Optional['ProxyConnector'] = None,
                 connect_timeout: Optional[int] = None):
        self.log = log or logging.getLogger(__name__)
        self._proxy = proxy
        self._connect_timeout = connect_timeout if connect_timeout else 5 if not proxy else 10
        self._ssl_context = ssl_context
        self._private_key = private_key
        self._origin = origin
        self.relays = [Relay(
            r, origin=origin, private_key=private_key, log=log, ssl_context=ssl_context,
            proxy=proxy, connect_timeout=self._connect_timeout)
            for r in set([normalize_url(url) for url in relays] if relays else [])]
        self.subscriptions = {}  # type: Dict[str, ManagerSubscription]
        self._subscription_lock = asyncio.Lock()
        self.connected = False
        self._connectlock = asyncio.Lock()
        self.taskgroup = aiorpcx.TaskGroup()

    @property
    def private_key(self):
        return self._private_key

    @private_key.setter
    def private_key(self, pk):
        for relay in self.relays:
            relay.private_key = pk

    def add(self, url, **kwargs):
        self.relays.append(Relay(url, **kwargs))

    @staticmethod
    async def monitor_queues(
        queues,
        output: asyncio.Queue[Optional[Event]],
        seen: Set[bytes],
        only_stored: bool,
    ):
        async def func(queue):
            while True:
                result = await queue.get()
                if result:
                    eid = result.id_bytes
                    if eid not in seen:
                        seen.add(eid)
                        await output.put(result)
                else:
                    if only_stored:
                        # EOSE message
                        # put none back on queue in case we update relays during this query, so the
                        # next monitoring task for this relay will return again here instead of waiting
                        # for another EOSE
                        await queue.put(None)
                        return
        tasks = [func(queue) for queue in queues]
        try:
            await asyncio.gather(*tasks)
        except asyncio.CancelledError:
            # don't shut down the output queue, we just want to update the relays
            return
        # if all tasks naturally returned (not cancelled) we got an EOSE of each relay (only_stored).
        await output.put(None)
        assert only_stored

    async def broadcast(self, relays, func, *args, **kwargs):
        """ returns when all tasks completed. timeout is enforced """
        results = []
        for relay in relays:
            coro = asyncio.wait_for(getattr(relay, func)(*args, **kwargs), timeout=self._connect_timeout)
            results.append(await self.taskgroup.spawn(coro))
        if not results:
            return
        self.log.debug("Waiting for %s", func)
        done, pending = await asyncio.wait(results, return_when=asyncio.ALL_COMPLETED)
        for task in done:
            try:
                task.result()
            except asyncio.TimeoutError:
                pass
            except Exception:
                self.log.exception("Exception in broadcast task")
        return done, pending

    async def connect(self):
        async with self._connectlock:
            if not self.connected:
                await self.broadcast(self.relays, 'connect', self.taskgroup)
                self.connected = True
                tried = len(self.relays)
                connected = [relay for relay in self.relays if relay.connected]
                success = len(connected)
                self.relays = connected
                self.log.info("Connected to %d out of %d relays", success, tried)

    async def close(self):
        await self.broadcast(self.relays, 'close', self.taskgroup)
        await self.taskgroup.cancel_remaining()
        self.connected = False
        if self._proxy:
            await self._proxy.close()
            self._proxy = None

    async def add_event(self, event):
        """ waits until one of the tasks succeeds, or raises timeout """
        queue = asyncio.Queue()

        async def _add_event(relay):
            try:
                result = await relay.add_event(event, check_response=True)
            except Exception:
                self.log.info(f'add_event: failed with {relay.url}')
                return
            await queue.put(result)

        for relay in self.relays:
            await self.taskgroup.spawn(_add_event(relay))
        result = await asyncio.wait_for(queue.get(), timeout=self._connect_timeout)
        return result

    async def subscribe(self, sub_id: str, only_stored: bool, *filters) -> asyncio.Queue[Optional[Event]]:
        """Apply the given filter to all relays and return a queue that collects incoming events"""
        relay_queues = []
        async with self._subscription_lock:
            for relay in self.relays:
                if sub_id not in relay.subscriptions:
                    relay_queues.append(await relay.subscribe(self.taskgroup, sub_id, *filters))
                else:
                    # relay is already subscribed to this sub_id
                    relay_queues.append(relay.subscriptions[sub_id].queue)
            if sub_id not in self.subscriptions:
                # create new output queue
                output_queue = asyncio.Queue()
                seen_events = set()
                subscription = ManagerSubscription(
                    monitor=await self.taskgroup.spawn(
                        self.monitor_queues(
                            relay_queues,
                            output_queue,
                            seen_events,
                            only_stored,
                        )
                    ),
                    filters=filters,
                    output_queue=output_queue,
                    seen_events=seen_events,
                    only_stored=only_stored,
                )
                self.subscriptions[sub_id] = subscription
            else:
                # update existing subscription
                subscription = self.subscriptions[sub_id]
                subscription.monitor.cancel()  # stop the old monitoring task
                output_queue = subscription.output_queue
                subscription.monitor = await self.taskgroup.spawn(  # start a new monitoring task
                    self.monitor_queues(
                        relay_queues,
                        output_queue,
                        subscription.seen_events,
                        subscription.only_stored,
                    )
                )
        return output_queue

    async def unsubscribe(self, sub_id: str):
        async with self._subscription_lock:
            await self.broadcast(self.relays, 'unsubscribe', sub_id)
            if sub_id in self.subscriptions:
                self.subscriptions[sub_id].monitor.cancel()
                self.subscriptions.pop(sub_id, None)

    async def update_relays(self, updated_relay_list: Iterable[str]) -> None:
        """Dynamically update the relays of an existing Manager instance"""
        if not self.connected:
            raise NotInitialized("Manager is not connected")
        changes: bool = False
        updated_relay_list: Set[str] = set(normalize_url(url) for url in updated_relay_list)
        self.log.debug(f"Updating relays, new list: {updated_relay_list}")
        # add relays that are not already connected
        new_relays = []
        for relay_url in updated_relay_list:
            if relay_url in [relay.url for relay in self.relays]:
                continue
            new_relay = Relay(
                relay_url, origin=self._origin, private_key=self._private_key, log=self.log,
                ssl_context=self._ssl_context, proxy=self._proxy, connect_timeout=self._connect_timeout)
            new_relays.append(new_relay)
        if new_relays:
            changes = True
            async with self._connectlock:
                await self.broadcast(new_relays, 'connect', self.taskgroup)
                connected_relays = [relay for relay in new_relays if relay.connected]
                self.relays.extend(connected_relays)
                self.log.info("Connected to %d out of %d new relays", len(connected_relays), len(new_relays))
        # remove relays that are no longer in the updated list
        remove_relays: List[Relay] = []
        for relay in self.relays:
            if relay.url not in updated_relay_list:
                remove_relays.append(relay)
        if remove_relays:
            changes = True
            async with self._connectlock:
                await self.broadcast(remove_relays, 'close', self.taskgroup)
                self.relays = [relay for relay in self.relays if relay not in remove_relays]
                self.log.info("Removed %d relays", len(remove_relays))
        # refresh subscriptions
        if changes:
            for sub_id, subscription in self.subscriptions.items():
                await self.subscribe(sub_id, subscription.only_stored, *subscription.filters)

    async def __aenter__(self):
        await self.taskgroup.__aenter__()
        await self.connect()
        return self

    async def __aexit__(self, ex_type, ex, tb):
        await self.close()
        await self.taskgroup.__aexit__(ex_type, ex, tb)

    async def get_events(
        self,
        *filters: dict[str, Any],
        only_stored: bool = True,
        single_event: bool = False,
        filter_future_events_sec: Optional[int] = 3600,
    ) -> AsyncGenerator[Event, None]:
        """
        Request events matching *filters from our connected relays.

        *filters: dicts representing the json in NIP-01
            https://github.com/nostr-protocol/nips/blob/master/01.md#communication-between-clients-and-relays
        only_stored: stops the subscription after the relays have sent all events they
            currently know of and will not keep waiting for future events.
        """
        sub_id = secrets.token_hex(4)
        queue = await self.subscribe(sub_id, only_stored, *filters)
        try:
            while True:
                # if only_stored is False we will wait forever on new events as we are also interested
                # in receiving future events. If only_stored is True we will either wait until we
                # got an EOSE from each relay (None) or until timeout.
                event: Optional[Event] = await asyncio.wait_for(
                    queue.get(),
                    timeout=self.EOSE_TIMEOUT_SEC if only_stored else None,
                )
                if event is None:
                    self.log.debug("received all stored events (EOSE).")
                    return
                # validate event: sigcheck already done in Event.__init__
                assert event.sig is not None
                # validate event: timestamp should not be in the future
                if filter_future_events_sec is not None:
                    if event.created_at > time.time() + filter_future_events_sec:
                        self.log.debug(f"event {event.id} too far into future")
                        continue
                yield event
                if single_event:
                    break
        except asyncio.TimeoutError:
            self.log.debug("received all stored events (timeout).")
        finally:
            # always clean up the subscription when exiting this context.
            # the 'yield' raises GeneratorExit when this generator gets garbage collected after the
            # consumer leaves it. https://peps.python.org/pep-0342/#specification-summary
            await self.unsubscribe(sub_id)
            self.log.debug(f"subscription {sub_id} closed")


class NotInitialized(Exception):
    pass

electrum_aionostr-0.1.0/src/electrum_aionostr/util.py

from .key import PublicKey, PrivateKey, bech32

NIP19_PREFIXES = ('npub', 'nsec', 'note', 'nprofile', 'nevent', 'nrelay', 'nostr:', 'naddr')


def from_nip19(nip19string: str):
    """
    Decode nip-19 formatted string into:
    private key, public key, event id or profile public key
    """
    hrp, data, spec = bech32.bech32_decode(nip19string)
    data = bech32.convertbits(data, 5, 8)
    retval = {
        'object': None,
        'type': hrp,
        'relays': None,
    }
    if hrp == 'npub':
        retval['object'] = PublicKey(bytes(data[:-1]))
    elif hrp == 'nsec':
        retval['object'] = PrivateKey(bytes(data[:-1]))
    elif hrp == 'note':
        retval['object'] = bytes(data[:-1]).hex()
    elif hrp in ('nevent', 'nprofile', 'nrelay', 'naddr'):
        tlv = {0: [], 1: [], 2: [], 3: []}
        while data:
            t = data[0]
            try:
                l = data[1]
            except IndexError:
                break
            v = data[2:2+l]
            data = data[2+l:]
            if not v:
                continue
            tlv[t].append(v)
        if tlv[0]:
            if hrp not in ('nrelay', 'naddr'):
                key_or_id = bytes(tlv[0][0]).hex()
            else:
                key_or_id = bytes(tlv[0][0]).decode()
        else:
            key_or_id = ''
        relays = []
        for relay in tlv[1]:
            relays.append(bytes(relay).decode('utf8'))
        if tlv[2]:
            retval['author'] = bytes(tlv[2][0]).hex()
        if tlv[3]:
            retval['kind'] = int.from_bytes(bytes(tlv[3][0]), 'big')
        retval['object'] = key_or_id
        retval['relays'] = relays
    return retval


def to_nip19(ntype: str, payload: str, relays=None, author=None, kind=None):
    """
    Encode object as nip-19 compatible string
    """
    if ntype in ('npub', 'nsec', 'note'):
        data = bytes.fromhex(payload)
    elif ntype in ('nprofile', 'nevent', 'nrelay', 'naddr'):
        data = bytearray()
        if ntype == 'nrelay':
            encoded = payload.encode()
            data.append(0)
            data.append(len(encoded))
            data.extend(encoded)
        elif ntype == 'naddr':
            encoded = payload.encode()
            data.append(0)
            data.append(len(encoded))
            data.extend(encoded)
            if author:
                author_encoded = bytes.fromhex(author)
                data.append(2)
                data.append(len(author_encoded))
                data.extend(author_encoded)
            if kind:
                kind_bytes = kind.to_bytes(4, 'big')
                data.append(3)
                data.append(len(kind_bytes))
                data.extend(kind_bytes)
        else:
            # payload is event id
            event_id = bytes.fromhex(payload)
            data.append(0)
            data.append(len(event_id))
            data.extend(event_id)
        if relays:
            for r in relays:
                r = r.encode()
                data.append(1)
                data.append(len(r))
                data.extend(r)
    else:
        data = payload.encode()
    converted_bits = bech32.convertbits(data, 8, 5)
    return bech32.bech32_encode(ntype, converted_bits, bech32.Encoding.BECH32)


def normalize_url(url: str) -> str:
    stripped_url = url.strip().rstrip('/').lower()
    if not stripped_url.startswith(('ws://', 'wss://')):
        stripped_url = 'wss://' + stripped_url
    return stripped_url

electrum_aionostr-0.1.0/src/electrum_aionostr.egg-info/
electrum_aionostr-0.1.0/src/electrum_aionostr.egg-info/PKG-INFO

Metadata-Version: 2.4
Name: electrum-aionostr
Version: 0.1.0
Summary: asyncio nostr 
client
Author: The Electrum developers
License-Expression: BSD-3-Clause
Project-URL: Homepage, https://github.com/spesmilo/electrum-aionostr
Project-URL: Repository, https://github.com/spesmilo/electrum-aionostr
Keywords: nostr,asyncio
Classifier: Development Status :: 2 - Pre-Alpha
Classifier: Intended Audience :: Developers
Classifier: Natural Language :: English
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: electrum_ecc
Requires-Dist: aiohttp<4.0.0,>=3.11.0
Requires-Dist: aiohttp_socks>=0.9.2
Requires-Dist: aiorpcx<0.26,>=0.22.0
Provides-Extra: crypto
Requires-Dist: cryptography>=2.8; extra == "crypto"
Provides-Extra: tests
Requires-Dist: pytest-cov; extra == "tests"
Requires-Dist: Click>=8.2; extra == "tests"
Provides-Extra: cli
Requires-Dist: Click; extra == "cli"
Dynamic: license-file

# electrum-aionostr

asyncio nostr client

```
Free software: BSD license
Original Author: Dave St.Germain
Fork Author/Maintainer: The Electrum Developers
Language: Python (>= 3.10)
```

[![Latest PyPI package](https://badge.fury.io/py/electrum-aionostr.svg)](https://pypi.org/project/electrum-aionostr/)
[![Build Status](https://api.cirrus-ci.com/github/spesmilo/electrum-aionostr.svg)](https://cirrus-ci.com/github/spesmilo/electrum-aionostr)

This is a fork of [aionostr](https://github.com/davestgermain/aionostr) that does not require Coincurve.
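Relay URLs passed to this package (on the command line or via the `NOSTR_RELAYS` environment variable) are normalized before connecting. A stdlib-only sketch mirroring `util.normalize_url` from this package (the example URLs are arbitrary):

```python
def normalize_url(url: str) -> str:
    # lowercase, trim whitespace and a trailing slash, default the scheme to wss://
    stripped_url = url.strip().rstrip('/').lower()
    if not stripped_url.startswith(('ws://', 'wss://')):
        stripped_url = 'wss://' + stripped_url
    return stripped_url

print(normalize_url("Nostr.Mom/"))    # wss://nostr.mom
print(normalize_url("wss://brb.io"))  # wss://brb.io
```

Normalizing before use means the same relay given in two spellings deduplicates to a single connection.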
## Getting started

```
$ python3 -m pip install --user ".[crypto]"
```

## Features

* Retrieve anything from the nostr network, using one command:

```
$ aionostr get nprofile1qqsv0knzz56gtm8mrdjhjtreecl7dl8xa47caafkevfp67svwvhf9hcpz3mhxue69uhkgetnvd5x7mmvd9hxwtn4wvspak3h
$ aionostr get -v nevent1qqsxpnzhw2ddf2uplsxgc5ctr9h6t65qaalzvzf0hvljwrz8q64637spp3mhxue69uhkyunz9e5k75j6gxm
$ aionostr query -s -q '{"kinds": [1], "limit":10}'
$ aionostr send --kind 1 --content test --private-key
$ aionostr mirror -r wss://source.relay -t wss://target.relay --verbose '{"kinds": [4]}'
```

Set environment variables:

```
NOSTR_RELAYS=wss://brb.io,wss://nostr.mom
NOSTR_KEY=`aionostr gen | head -1`
```

### Maintainer notes

Release checklist:
- bump `__version__` in `__init__.py`
- write changelog in [`docs/history.md`](docs/history.md)
- `$ git tag -s $VERSION -m "$VERSION"`
- `$ git push "$REMOTE_ORIGIN" tag "$VERSION"`
- build sdist (see [`contrib/sdist/`](contrib/sdist)):
  - `$ ELECBUILD_COMMIT=HEAD ELECBUILD_NOCACHE=1 ./contrib/sdist/build.sh`
- `$ python3 -m twine upload dist/$DISTNAME`

electrum_aionostr-0.1.0/src/electrum_aionostr.egg-info/SOURCES.txt

LICENSE
README.md
pyproject.toml
src/electrum_aionostr/__init__.py
src/electrum_aionostr/bech32.py
src/electrum_aionostr/benchmark.py
src/electrum_aionostr/cli.py
src/electrum_aionostr/crypto_aes.py
src/electrum_aionostr/delegation.py
src/electrum_aionostr/event.py
src/electrum_aionostr/key.py
src/electrum_aionostr/relay.py
src/electrum_aionostr/util.py
src/electrum_aionostr.egg-info/PKG-INFO
src/electrum_aionostr.egg-info/SOURCES.txt
src/electrum_aionostr.egg-info/dependency_links.txt
src/electrum_aionostr.egg-info/entry_points.txt
src/electrum_aionostr.egg-info/requires.txt
src/electrum_aionostr.egg-info/top_level.txt
tests/test_aionostr.py
tests/test_event.py
tests/test_key.py
tests/test_manager.py

electrum_aionostr-0.1.0/src/electrum_aionostr.egg-info/dependency_links.txt

electrum_aionostr-0.1.0/src/electrum_aionostr.egg-info/entry_points.txt

[console_scripts]
aionostr = electrum_aionostr.cli:main

electrum_aionostr-0.1.0/src/electrum_aionostr.egg-info/requires.txt

electrum_ecc
aiohttp<4.0.0,>=3.11.0
aiohttp_socks>=0.9.2
aiorpcx<0.26,>=0.22.0

[cli]
Click

[crypto]
cryptography>=2.8

[tests]
pytest-cov
Click>=8.2

electrum_aionostr-0.1.0/src/electrum_aionostr.egg-info/top_level.txt

electrum_aionostr

electrum_aionostr-0.1.0/tests/test_aionostr.py

from click.testing import CliRunner

from electrum_aionostr import cli


def test_command_line_interface():
    """Test the CLI."""
    runner = CliRunner()
    result = runner.invoke(cli.main)
    assert result.exit_code == 2  # no_args_is_help will result in 2, since https://github.com/pallets/click/pull/1489
    assert 'Console script for aionostr' in result.output
    help_result = runner.invoke(cli.main, ['--help'])
    assert help_result.exit_code == 0
    assert '--help Show this message and exit.' in help_result.output

electrum_aionostr-0.1.0/tests/test_event.py

import dataclasses
import unittest
import os
import time

from electrum_aionostr.event import Event, InvalidEvent
from electrum_aionostr.key import PrivateKey


class TestEvent(unittest.TestCase):

    def test_verify(self):
        privkey1 = PrivateKey(os.urandom(32))
        privkey2 = PrivateKey(os.urandom(32))
        unsigned_event = Event(
            pubkey=privkey1.public_key.hex(),
            content="test"
        )
        # verify event without signature
        self.assertFalse(unsigned_event.verify())
        # verify event with correct signature
        event = unsigned_event.sign(privkey1.hex())
        self.assertTrue(event.verify())
        # Event with incorrect signature cannot even be created:
        with self.assertRaises(InvalidEvent):
            event = unsigned_event.sign(privkey2.hex())

    def test_expiration(self):
        privkey1 = PrivateKey(os.urandom(32))
        event = Event(
            pubkey=privkey1.public_key.hex(),
            content="test"
        )
        # Test event with no expiration tag
        self.assertFalse(event.is_expired())

        # Test event with expiration tag set in the future
        future_time = int(time.time()) + 3600
        assert event.tags == []
        event = event.add_expiration_tag(future_time)
        self.assertFalse(event.is_expired())

        # Test event with expiration tag set in the past
        event = dataclasses.replace(event, tags=[["expiration", str(int(time.time()) - 3600)]])
        self.assertTrue(event.is_expired())

        # Test event with expiration tag set during initialization
        future_time = int(time.time()) + 999999
        event_with_expiration = Event(
            pubkey=privkey1.public_key.hex(),
            content="test",
            tags=[["expiration", str(future_time)]],
        )
        self.assertFalse(event_with_expiration.is_expired())
        self.assertEqual(future_time, event_with_expiration.expires_at())

        # test expired event with multiple tags
        expiration_time = int(time.time())
        tags = []
        tags.append(["test", "test"])
        tags.append(["test"])
        tags.append(["test", "21312", "test"])
        tags.append(["expiration", str(expiration_time)])
        tags.append(["test", "test"])
        tags.append(["test"])
        tags.append(["test", "21312", "test"])
        event = Event(
            pubkey=privkey1.public_key.hex(),
            content="test",
            tags=tags,
        )
        self.assertTrue(event.is_expired())
        self.assertEqual(event.expires_at(), expiration_time)

electrum_aionostr-0.1.0/tests/test_key.py

from hashlib import sha256
import unittest

from electrum_aionostr.key import PrivateKey, PublicKey

bfh = bytes.fromhex


class TestKey(unittest.TestCase):

    def test_basics(self):
        nsec = 'nsec1yc7ftz6k59mwnnl2chvh7lth9sz208tl8mygn29t98dcg5dg8avsqg7xh4'
        npub = 'npub1aq9rl4v66ch3xxrv9n4gunlvqjxgnpqsyhvath4ee3d2z9k55t8s8z6dnu'
        secret_bytes = bfh("263c958b56a176e9cfeac5d97f7d772c04a79d7f3ec889a8ab29db8451a83f59")
        pubkey_bytes = bfh("e80a3fd59ad62f13186c2cea8e4fec048c89841025d9d5deb9cc5aa116d4a2cf")

        privkey1 = PrivateKey.from_nsec(nsec)
        privkey2 = PrivateKey(secret_bytes)
        self.assertEqual(secret_bytes, privkey1.raw_secret)
        self.assertEqual(secret_bytes, privkey2.raw_secret)
        self.assertEqual(secret_bytes.hex(), privkey1.hex())

        pubkey1 = privkey1.public_key
        pubkey2 = PublicKey(pubkey_bytes)
        pubkey3 = PublicKey.from_npub(npub)
        self.assertEqual(npub, pubkey1.bech32())
        self.assertEqual(npub, pubkey2.bech32())
        self.assertEqual(npub, pubkey3.bech32())
        self.assertEqual(pubkey_bytes.hex(), pubkey1.raw_bytes.hex())
        self.assertEqual(pubkey_bytes.hex(), pubkey1.hex())
        self.assertEqual(pubkey_bytes.hex(), pubkey2.hex())
        self.assertEqual(pubkey_bytes.hex(), pubkey3.hex())

    def test_sign_message_hash(self):
        secret_bytes = bfh("263c958b56a176e9cfeac5d97f7d772c04a79d7f3ec889a8ab29db8451a83f59")
        privkey = PrivateKey(secret_bytes)
        msg_hash = sha256(b"hello there").digest()
        sig_hex = privkey.sign_message_hash(msg_hash)
        pubkey = privkey.public_key
        self.assertTrue(pubkey.verify_signed_message_hash(msg_hash.hex(), sig_hex))
        self.assertFalse(pubkey.verify_signed_message_hash(msg_hash.hex(), bytes(64).hex()))
        self.assertFalse(pubkey.verify_signed_message_hash(msg_hash.hex(), bytes(range(64)).hex()))

    def test_encrypt_message(self):
        privkey1 = PrivateKey(bfh("263c958b56a176e9cfeac5d97f7d772c04a79d7f3ec889a8ab29db8451a83f59"))
        privkey2 = PrivateKey(bfh("80a20c6f606010d4e259cae4c0231bab26da25f7e5497bb21a9d48298d0603da"))
        msg1 = "hello there"
        ciphertext = privkey1.encrypt_message(msg1, privkey2.public_key.hex())
        self.assertEqual(msg1, privkey2.decrypt_message(ciphertext, privkey1.public_key.hex()))
        self.assertEqual(msg1, privkey1.decrypt_message(ciphertext, privkey2.public_key.hex()))

electrum_aionostr-0.1.0/tests/test_manager.py

import unittest
import os
import asyncio
from unittest.mock import patch
from logging import getLogger
import json

from electrum_aionostr.relay import Manager, Relay
from electrum_aionostr.key import PrivateKey
from electrum_aionostr.event import Event

_logger = getLogger(__name__)
_logger.setLevel('DEBUG')


def get_random_dummy_event() -> Event:
    privkey = PrivateKey(os.urandom(32))
    event = Event(
        pubkey=privkey.public_key.hex(),
        content="test"
    )
    event = event.sign(privkey.hex())
    return event


class DummyWebsocket:
    def __init__(self):
        self.incoming_messages = asyncio.Queue()  # data we receive from the relay
        self.outgoing_messages = asyncio.Queue()  # data we send to the relay

    async def receive_str(self):
        msg = await self.incoming_messages.get()
        _logger.debug("DummyWebsocket received message")
        return msg

    async def send_str(self, message: str):
        await self.outgoing_messages.put(message)
        _logger.debug("DummyWebsocket sent message")

    async def close(self):
        _logger.debug("DummyWebsocket closed")


class DummyClientSession:
    def __init__(self):
        self.dummy_websocket = DummyWebsocket()

    async def ws_connect(self, url, origin, ssl):
        _logger.debug("DummyClientSession ws connected")
        return self.dummy_websocket

    async def close(self):
        _logger.debug("DummyClientSession closed")


class DummyRelay(Relay):
    """Relay without network connections to test the relay manager"""

    def __init__(
        self,
        url: str,
        origin: str = '',
        private_key: str = '',
        connect_timeout: float = 1.0,
        log=None,
        ssl_context=None,
        proxy=None,
    ):
        Relay.__init__(self, url, origin, private_key, connect_timeout, log, ssl_context, proxy)
        # this will make Relay.connect() use the DummyClientSession instead of an aiohttp ClientSession
        self.client = DummyClientSession()

    def receive_data_from_relay(self, data):
        # put the data on the dummy websocket so the Relay instance treats it as if it were
        # arriving on the websocket connection to the connected relay
        self.client.dummy_websocket.incoming_messages.put_nowait(data)


class TestManager(unittest.IsolatedAsyncioTestCase):

    async def test_monitor_queues_event_deduplication(self):
        """
        Tests if the events returned by multiple relays are properly deduplicated.
        """
        output_queue = asyncio.Queue()  # this is what the consumer of the subscription will receive
        input_queues = [asyncio.Queue() for _ in range(10)]  # these are the relays
        dummy_events = [get_random_dummy_event() for _ in range(20)]
        for queue in input_queues:
            for dummy_event in dummy_events:
                queue.put_nowait(dummy_event)
            queue.put_nowait(None)  # EOSE

        # Create a patched version of Queue.put that adds a delay to force context
        # switching as it happens with regular usage of monitor_queues
        original_put = asyncio.Queue.put
        async def slow_put(self, item):
            await asyncio.sleep(0.01)
            await original_put(self, item)

        with patch('asyncio.Queue.put', slow_put):
            monitoring_task = asyncio.create_task(Manager.monitor_queues(
                input_queues,
                output_queue,
                set(),
                True,
            ))
            # check if the output queue returns some events twice
            event_ids = set()
            while True:
                event = await asyncio.wait_for(output_queue.get(), timeout=10)
                if event is None:
                    assert len(event_ids) == len(dummy_events)
                    break
                assert event.id not in event_ids
                event_ids.add(event.id)
        monitoring_task.cancel()

    async def test_manager_deduplicates_relays(self):
        """
        Relay manager should deduplicate relay urls so it doesn't try to open multiple
        connections to the same relay if it gets passed slightly different URLs.
        This is important as we often have to open connections on-demand with urls parsed
        from Nostr event tags which may be slightly different from our own config urls.
        """
        relay_urls = [
            "wss://test.com/",
            "wss://test.com/",
            "wss://test.com",
            "wss://TEST.COM",
            "wSS://test.com",
            "wss://TEST.com",
            "test.com",
            "TEST.COM",
        ]
        manager = Manager(
            relays=relay_urls,
        )
        self.assertEqual(len(manager.relays), 1, msg=[r.url for r in manager.relays])
        self.assertEqual(manager.relays[0].url, "wss://test.com")

    async def test_subscription_gets_closed_on_return(self):
        """Test that get_events properly unsubscribes when exiting its AsyncGenerator"""
        private_key = os.urandom(32)
        with patch('electrum_aionostr.relay.Relay', DummyRelay):
            manager = Manager(
                relays=[f"wss://dummy{i}.relay" for i in range(10)],
                private_key=private_key.hex(),
                log=_logger,
            )
            await manager.connect()
            self.assertTrue(manager.connected)

            received_any_event = asyncio.Future()
            async def get_some_events():
                query = {'kinds': [1]}
                async for event in manager.get_events(query, only_stored=False, single_event=False):
                    received_any_event.set_result(event)
                    # return after we received any event, the subscription should get closed
                    return event

            event_task = asyncio.create_task(get_some_events())
            while len(manager.subscriptions) < 1:  # wait until task creates subscription
                await asyncio.sleep(0.01)
            self.assertEqual(len(manager.subscriptions), 1, msg="manager should have exactly one subscription")

            subscription_id = next(iter(manager.subscriptions.keys()))
            # now let the relays send us some events for this subscription
            for i in range(5):
                relay_message = json.dumps(['EVENT', subscription_id, get_random_dummy_event().to_json_object()])
                for dummy_relay in manager.relays:
                    dummy_relay.receive_data_from_relay(relay_message)

            await asyncio.wait_for(received_any_event, timeout=0.5)
            # now the subscription task returned, leaving the async generator.
            # the subscription should get closed and cleaned up
            async def wait_for_cleanup():
                while subscription_id in manager.subscriptions:
                    await asyncio.sleep(0.01)
            await asyncio.wait_for(wait_for_cleanup(), timeout=0.5)
            self.assertTrue(event_task.done())

    async def test_subscription_returns_event_stored_only(self):
        """
        Test that we don't immediately close the subscription if only_stored=True and any
        relay returns EOSE (End of stored events) before another relay got the chance to
        send us the event we requested.
        """
        private_key = os.urandom(32)
        with patch('electrum_aionostr.relay.Relay', DummyRelay):
            manager = Manager(
                relays=[f"wss://dummy{i}.relay" for i in range(10)],
                private_key=private_key.hex(),
                log=_logger,
            )
            await manager.connect()
            self.assertTrue(manager.connected)

            async def get_event():
                query = {'kinds': [1]}
                got_event = None
                async for event in manager.get_events(query, only_stored=True, single_event=False):
                    got_event = event
                self.assertIsNotNone(got_event, msg="Subscription didn't return any event")

            event_task = asyncio.create_task(get_event())
            while len(manager.subscriptions) < 1:  # wait until task creates subscription
                await asyncio.sleep(0.01)
            self.assertEqual(len(manager.subscriptions), 1, msg="manager should have exactly one subscription")

            # all relays except the last one report they don't have any event stored
            subscription_id = next(iter(manager.subscriptions.keys()))
            eose_message = json.dumps(['EOSE', subscription_id])
            for dummy_relay in manager.relays[:-1]:
                dummy_relay.receive_data_from_relay(eose_message)
            # the first relay even sends multiple EOSE to us
            for _ in range(10):
                manager.relays[0].receive_data_from_relay(eose_message)

            # the last relay will send one event and then EOSE
            last_relay = manager.relays[-1]
            event_message = json.dumps(['EVENT', subscription_id, get_random_dummy_event().to_json_object()])
            last_relay.receive_data_from_relay(event_message)
            last_relay.receive_data_from_relay(eose_message)

            # the event task should return once it got the event as we set only_stored=True
            await asyncio.wait_for(event_task, timeout=1)
            event_task.result()

    async def test_subscription_doesnt_get_closed(self):
        """
        Test that a subscription for future events (only_stored=False) doesn't get closed
        if all relays send EOSE.
        """
        private_key = os.urandom(32)
        with patch('electrum_aionostr.relay.Relay', DummyRelay):
            manager = Manager(
                relays=[f"wss://dummy{i}.relay" for i in range(10)],
                private_key=private_key.hex(),
                log=_logger,
            )
            await manager.connect()
            self.assertTrue(manager.connected)

            any_event = asyncio.Future()
            async def get_event():
                query = {'kinds': [1]}
                async for event in manager.get_events(query, only_stored=False, single_event=False):
                    any_event.set_result(event)
                self.assertTrue(False, msg="Subscription stopped")

            event_task = asyncio.create_task(get_event())
            while len(manager.subscriptions) < 1:  # wait until task creates subscription
                await asyncio.sleep(0.01)
            self.assertEqual(len(manager.subscriptions), 1, msg="manager should have exactly one subscription")

            # all relays send EOSE, but the subscription should stay open
            subscription_id = next(iter(manager.subscriptions.keys()))
            eose_message = json.dumps(['EOSE', subscription_id])
            for dummy_relay in manager.relays:
                dummy_relay.receive_data_from_relay(eose_message)

            # check that the task is still running and that the subscription didn't return anything
            await asyncio.sleep(0.1)
            self.assertFalse(event_task.done(), msg="Subscription task stopped")
            self.assertFalse(any_event.done())

            # now send one event to a single relay, it should be set in the future
            relay = manager.relays[0]
            dummy_event = get_random_dummy_event().to_json_object()
            event_message = json.dumps(['EVENT', subscription_id, dummy_event])
            relay.receive_data_from_relay(event_message)
            await asyncio.wait_for(any_event, timeout=0.5)
            self.assertEqual(dummy_event, any_event.result().to_json_object())

            await asyncio.sleep(0.1)
            self.assertFalse(event_task.done(), msg="The task should still be running")
        event_task.cancel()
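The deduplication behaviour exercised in `test_monitor_queues_event_deduplication` can be sketched in isolation. The following is a simplified, stdlib-only stand-in for what `Manager.monitor_queues` is tested to do; the name `merge_queues` and the round-robin polling strategy are illustrative, not the library's actual implementation:

```python
import asyncio

async def merge_queues(input_queues, output_queue):
    # Forward items from several per-relay queues into one output queue,
    # emitting each item at most once; a None from a queue marks its EOSE.
    seen = set()
    finished = 0
    while finished < len(input_queues):
        for q in input_queues:
            try:
                item = q.get_nowait()
            except asyncio.QueueEmpty:
                continue
            if item is None:
                finished += 1
            elif item not in seen:
                seen.add(item)
                await output_queue.put(item)
        await asyncio.sleep(0)  # yield to the event loop between polling rounds

async def demo():
    # three "relays" all serving the same three event ids
    queues = [asyncio.Queue() for _ in range(3)]
    for q in queues:
        for event_id in ("a", "b", "c"):
            q.put_nowait(event_id)
        q.put_nowait(None)  # EOSE from this relay
    out = asyncio.Queue()
    await merge_queues(queues, out)
    results = []
    while not out.empty():
        results.append(out.get_nowait())
    return results

results = asyncio.run(demo())
print(results)  # ['a', 'b', 'c'] -- each event id exactly once
```

Because nostr event ids are content-addressed hashes, a shared `seen` set of ids is sufficient to collapse the copies that every connected relay sends for the same subscription.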