A CLI tool and an API for fetching data from Twitter for free! It is recommended to install the package globally if you want to use it from the CLI. [...] User Timeline 'user' authentication (logging in) grants access to the following resources/actions: Tweet Details [...] The API_KEY generated by logging in is what allows Rettiwt-API to authenticate as a logged-in user while interacting with the Twitter API ('user' authentication). [...] This API uses the cookies of a Twitter account to fetch data from Twitter, so there is always a chance (although a measly one) of the account getting banned by Twitter's algorithm.
Marker converts PDF to markdown quickly and accurately. It supports a wide range of documents (optimized for books and scientific papers). [...] Here are some known limitations that are on the roadmap to address: Marker will not convert 100% of equations to LaTeX. [...] marker /path/to/input/folder /path/to/output/folder --workers 10 --max 10 --metadata_file /path/to/metadata.json --min_length 10000, where --workers is the number of PDFs to convert at once. [...] Then run benchmark.py like this: python benchmark.py data/pdfs data/references report.json --nougat This will benchmark marker against other text extraction methods.
yt-dlp is a https://github.com/ytdl-org/youtube-dl fork based on the now-inactive https://github.com/blackjack4494/yt-dlc. You can install yt-dlp using #release-files, https://pypi.org/project/yt-dlp, or a third-party package manager. [...] # Download best format that contains video, # and if it doesn't already have an audio stream, merge it with best audio-only format $ yt-dlp -f "bv+ba/b" [...] # Download and merge the best format that has a video stream, # and the best 2 audio-only formats into one file $ yt-dlp -f "bv+ba+ba.2" --audio-multistreams [...] # Download the best mp4 video available, or the best video if no mp4 available $ yt-dlp -f "bv[ext=mp4]+ba[ext=m4a]/b[ext=mp4] / bv+ba/b"
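The same format-selection syntax works when yt-dlp is embedded in Python rather than run from the shell. A minimal sketch using the documented YoutubeDL entry point (the URL is just a placeholder):

```python
# Sketch: embed yt-dlp and reuse the "bv+ba/b" selector from the CLI examples above.
from yt_dlp import YoutubeDL

ydl_opts = {
    "format": "bv+ba/b",  # best video + best audio, else best combined format
}

with YoutubeDL(ydl_opts) as ydl:
    # Placeholder URL; pass any supported video or playlist URL here.
    ydl.download(["https://www.youtube.com/watch?v=example"])
```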
Bun is a fast, all-in-one toolkit for running, building, testing, and debugging JavaScript and TypeScript, from a single file to a full-stack application. Install Bun curl curl -fsSL https://bun.sh/install | bash docker docker run --rm --init --ulimit memlock=-1:-1 oven/bun [...] import { plugin } from "bun"; plugin({ name: "YAML", async setup(build) { const { load } = await import("js-yaml"); const { readFileSync } = await import("fs"); build.onLoad({ filter: /. [...] import { test, expect } from "bun:test"; test("2 + 2", () => { expect(2 + 2).toBe(4); }); You can run your tests with the bun test command. [...] const release = await getRelease(); release.ts export async function getRelease(): Promise<string> { const response = await fetch("https://api.github.com/repos/oven-sh/bun/releases/latest"); const { tag_name } = await response.json(); return tag_name; }
Upscayl lets you enlarge and enhance low-resolution images using advanced AI algorithms. Enlarge images without losing quality; it's almost like magic! [...] 🐧 Linux https://flathub.org/apps/org.upscayl.Upscayl https://appimage.github.io/Upscayl/ https://aur.archlinux.org/packages/upscayl-bin https://snapcraft.io/upscayl/ Upscayl should be available in the Software Store of most Linux operating systems. [...] Right-click the AppImage -> go to the Permissions tab -> check 'allow file to execute', then double-click the file to run Upscayl. [...] Upscayl uses AI models to enhance your images by guessing what the details could be.
https://twitter.com/kdrag0n/status/1638917691036803073 Say goodbye to slow, clunky containers and VMs. Seamless and efficient Docker and Linux on your Mac. [...] Starts in seconds with turbocharged networking, smooth Rosetta x86 emulation, VirtioFS file sharing, and other optimizations for some workloads. [...] Enjoy Docker as if it were native to macOS, plus CLI integration, file sharing, and remote SSH editing with Linux machines. [...] Connect between Linux machines and Docker containers, and use IPv6 and ICMP painlessly.
Download all your kindle books script. pip3 install -r requirements.txt python no_kindle.py -e ${email} -p ${password} [...] CLI installation and usage (requires Python 3): python3 --version # check the python version pip3 install kindle_download # install via pip git clone https://github.com/yihong0618/Kindle_download_helper.git && cd Kindle_download_helper pip3 install -r requirements.txt python kindle.py -h # show the usage and arguments usage: kindle.py [-h] [--cookie COOKIE | --cookie-file COOKIE_FILE] [--cn] [--jp] [--de] [--uk] [--resume-from INDEX] [--cut-length CUT_LENGTH] [-o OUTDIR] [-od OUTDEDRMDIR] [-s SESSION_FILE] [--pdoc] [--resolve_duplicate_names] [--readme] [--dedrm] [--list] [--device_sn DEVICE_SN] [--mode MODE] [csrf_token] positional arguments: csrf_token amazon or amazon cn csrf token optional arguments: -h, --help show this help message and exit --cookie COOKIE amazon or amazon cn cookie --cookie-file COOKIE_FILE load cookie from a local file --cn if your account is an amazon.cn account --jp if your account is an amazon.co.jp account --de if your account is an amazon.de account --uk if your account is an amazon.co.uk account --resume-from INDEX resume from the index if download failed --cut-length CUT_LENGTH truncate the file name -o OUTDIR, --outdir OUTDIR download output dir -od OUTDEDRMDIR, --outdedrmdir OUTDEDRMDIR download output dedrm dir -s SESSION_FILE, --session-file SESSION_FILE the reusable session dump file --pdoc to download personal documents or ebooks --resolve_duplicate_names resolve duplicate file names when downloading --readme if you want to generate kindle readme stats --dedrm if you want to dedrm
directly --list just list books/pdoc, do not download --device_sn DEVICE_SN download files for the device with this serial number --mode MODE mode of download, all: download all files at once, sel: download selected files Downloading Kindle books [...] python3 kindle.py --cn ${csrfToken} # Both cookie and CSRF Token python3 kindle.py --cn --cookie ${cookie} ${csrfToken} Automatically obtaining the cookie: if you run the project on your local machine, it can use the https://github.com/borisbabic/browser_cookie3 library to fetch the cookie from your browser automatically. Using amazon.cn: log in to amazon.cn, visit https://www.amazon.cn/hz/mycd/myx#/home/content/booksAll/dateDsc/, right-click to view the page source, search for csrfToken, copy the value that follows it, then run python3 kindle.py --cn. To download pushed personal documents: python3 kindle.py --cn --pdoc. To DeDRM/decrypt directly (not guaranteed to work well): python3 kindle.py --cn --pdoc --dedrm. How to use amazon.com: log in to amazon.com, visit https://www.amazon.com/hz/mycd/myx#/home/content/booksAll/dateDsc/, right-click to view the page source, find and copy the csrfToken value, then run: python3 kindle.py. For personal documents: python3 kindle.py --pdoc. How to use amazon.de: log in to amazon.de, visit https://www.amazon.de/hz/mycd/myx#/home/content/booksAll/dateDsc/, right-click to view the page source, find and copy the csrfToken value, then run: python3 kindle.py --de. For personal documents: python3 kindle.py --de --pdoc. How to use amazon.co.uk: log in to amazon.co.uk, visit https://www.amazon.co.uk/hz/mycd/myx#/home/content/booksAll/dateDsc/, right-click to view the page source, find and copy the csrfToken value, then run: python3 kindle.py --uk. For personal documents: python3 kindle.py --uk --pdoc. To use amazon.jp: log in to amazon.co.jp.
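Since the README mentions that browser_cookie3 can pull the Amazon cookie straight from a local browser, here is a small sketch of what that lookup could look like if you wanted to do it by hand; the domain and the idea of passing the result through --cookie are assumptions, not the project's exact internals:

```python
# Sketch (assumption): read Amazon cookies from a locally installed browser
# with browser_cookie3 and serialize them into a header-style string that
# could be handed to kindle.py via --cookie.
import browser_cookie3

cookie_jar = browser_cookie3.load(domain_name="amazon.cn")  # or amazon.com / .de / .co.uk / .co.jp
cookie_string = "; ".join(f"{c.name}={c.value}" for c in cookie_jar)
print(cookie_string)
```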
https://github.com/j178/chatgpt/releases A CLI for ChatGPT, powered by GPT-3.5-turbo and GPT-4 models. Get or create your OpenAI API Key from here: https://platform.openai.com/account/api-keys 💬 Start in chat mode [...] 💻 Use it in a pipeline cat config.yaml | chatgpt -p 'convert this yaml to json' echo "Hello, world" | chatgpt -p translator | say [...] You can add more prompts in the config file, for example: {"api_key": "sk-xxxxxx", "endpoint": "https://api.openai.com/v1", "prompts": {"default": "You are ChatGPT, a large language model trained by OpenAI. [...] "}, "conversation": {"prompt": "default", "context_length": 6, "model": "gpt-3.5-turbo", "stream": true, "max_tokens": 1024 }} then use the -p flag to switch prompts. Note: the prompt can be a predefined one, or you can come up with one on the fly.
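To make the config fields above concrete, here is a rough sketch of the kind of request such a CLI would send to the configured endpoint; the request shape is the standard OpenAI chat completions API, while the exact way this tool assembles it internally is an assumption:

```python
# Sketch: what the example config roughly maps to on the wire (a standard
# OpenAI /chat/completions request). The api_key, endpoint, prompt, model,
# and max_tokens values mirror the config shown above.
import requests

endpoint = "https://api.openai.com/v1"
api_key = "sk-xxxxxx"
system_prompt = "You are ChatGPT, a large language model trained by OpenAI."

resp = requests.post(
    f"{endpoint}/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "gpt-3.5-turbo",
        "max_tokens": 1024,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Hello, world"},
        ],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```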
Unified Model Serving Framework
🏹 Scalable with powerful performance optimizations
The runner abstraction scales model inference separately from your custom code and maximizes multi-core CPU utilization with automatic provisioning [...]
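These fragments describe a serving framework where model inference runs behind a scalable abstraction, separate from the surrounding API code. The tagline and wording match BentoML's README, so here is a minimal sketch assuming the BentoML 1.x Service/Runner API and a previously saved sklearn model named iris_clf (both assumptions, since neither the framework nor a model is named explicitly above):

```python
# Sketch (assuming BentoML 1.x): the runner hosts the saved model so inference
# can scale independently of the custom API code defined around it.
import bentoml
from bentoml.io import NumpyNdarray

runner = bentoml.sklearn.get("iris_clf:latest").to_runner()
svc = bentoml.Service("iris_classifier", runners=[runner])

@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
async def classify(input_array):
    # Inference executes in the runner's own workers, not in the API process.
    return await runner.predict.async_run(input_array)
```

Under the same assumptions, the service would be started locally with: bentoml serve service.py:svc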
We strip out as much potentially sensitive information as possible, and we will never collect user code, model data, model names, or stack traces.