Moonshot AI (Kimi.ai) interface to API: https://github.com/LLM-Red-Team/kimi-free-api
Alibaba Tongyi (Qwen) interface to API: https://github.com/LLM-Red-Team/qwen-free-api
[...]
Authorization: Bearer TOKEN1,TOKEN2,TOKEN3
[...]
Build your Web Service (New+ -> Build and deploy from a Git repository -> connect the repository you forked -> choose a deployment region -> select the Free instance type -> Create Web Service).
Deploying with Vercel:
npm i -g vercel --registry http://registry.npmmirror.com
vercel login
git clone https://github.com/LLM-Red-Team/deepseek-free-api
cd deepseek-free-api
vercel --prod
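For reference, a minimal sketch (not taken from the project docs) of calling such a deployed service, assuming it exposes an OpenAI-compatible /v1/chat/completions endpoint and accepts the comma-joined tokens shown above; the base URL, model name, and tokens are placeholders:

```ts
// Sketch: call a deployed *-free-api service through an assumed OpenAI-compatible
// chat-completions endpoint. BASE_URL, TOKENS, and the model name are placeholders.
const BASE_URL = 'https://your-render-or-vercel-app.example.com'
const TOKENS = 'TOKEN1,TOKEN2,TOKEN3' // multiple tokens, comma-joined as in the header above

async function ask(question: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${TOKENS}`,
    },
    body: JSON.stringify({
      model: 'kimi', // the model name depends on the specific free-api project
      messages: [{ role: 'user', content: question }],
      stream: false,
    }),
  })
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status}`)
  }
  const data = await res.json()
  return data.choices[0].message.content
}

ask('Hello!').then(console.log).catch(console.error)
```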
This project aims to enable permanent use of Xcode LLM / Apple Intelligence on any Mac without disabling System Integrity Protection (SIP), or with SIP disabled only once. Apple Intelligence is only supported on recent macOS versions; XcodeLLM, Apple Intelligence, and ChatGPT integration have been tested and work normally on a Mac mini (M4 Pro, 2024) running macOS 15.2. [...] If you choose to use this project, you do so at your own risk and are responsible for compliance with any applicable laws. [...]
# Override XcodeLLM only
curl -L https://raw.githubusercontent.com/Kyle-Ye/XcodeLLMEligible/release/0.2/scripts/override.sh | bash -s -- install override xcodellm
# Override Apple Intelligence only
curl -L https://raw.githubusercontent.com/Kyle-Ye/XcodeLLMEligible/release/0.2/scripts/override.sh | bash -s -- install override greymatter
# For Apple Intelligence + Cleanup
curl -L https://raw.githubusercontent.com/Kyle-Ye/XcodeLLMEligible/release/0.2/scripts/override.sh | bash -s -- install override greymatter+strontium
# For XcodeLLM + Apple Intelligence + Cleanup
curl -L https://raw.githubusercontent.com/Kyle-Ye/XcodeLLMEligible/release/0.2/scripts/override.sh | bash -s -- install override xcodellm+greymatter+strontium
Note [...]
Other issues
curl -L https://raw.githubusercontent.com/Kyle-Ye/XcodeLLMEligible/release/0.2/scripts/override.sh | bash -s -- doctor
Experience real-time, accurate AI conversation for free with GPT-4o and Claude 3.5 Sonnet. Chat100.ai: try OpenAI GPT-4o and Claude 3.5 for free. GPT-4o and Claude 3.5 Sonnet are advanced AI models designed to deliver fast, accurate, and intelligent replies. On Chat100.ai you can use GPT-4o and Claude 3.5 Sonnet for free, with no login required. Chat100.ai offers a smooth AI chat experience and is an ideal free ChatGPT alternative for users looking for advanced AI support.
Today we are talking about large models, and their core is the scaling law. Where Ilya Sutskever (OpenAI co-founder and chief scientist) saw further than others was precisely the scaling law. In fact, between the first and second stages in the US there was an intermediate stage: to-B software companies. Every SaaS company is working hard to use large models to improve its software, so in to-B SaaS, on the productivity side, this is a market where the US is ahead of China, and a lot has already been invested there.
This book was created by @xiaolai with the help of ChatGPT and its TTS. The final work is available at https://github.com/xiaolai/most-common-american-idioms/blob/main/Most_Common_American_Idioms.html, and the audio can be played directly in the browser. The individual audio files are in the audio directory, and another version with combined files (one file per 10 idioms) is in the combined directory. You can also download a compiled version from https://pan.baidu.com/s/1zk9XrlIe26aELul2reXhIw?pwd=nqbj. Moreover, you can use the https://1000h.org/enjoy-app/ from the https://1000h.org project to practice with the audio files from this book.
If you like this plugin, please support my GitHub project https://github.com/xcanwin/KeepChatGPT with a ⭐️ star.
No. | Screenshot
1 | Light theme + 净化页面: https://github.com/xcanwin/KeepChatGPT/blob/main/assets/index_light.png
2 | Light theme + 明察秋毫 + 展示大屏 + 日新月异: https://github.com/xcanwin/KeepChatGPT/blob/main/assets/chat_light.png
3 | Dark theme + 明察秋毫 + 展示大屏 + 日新月异: https://github.com/xcanwin/KeepChatGPT/blob/main/assets/chat_dark.png
4 | Mobile + 净化页面: https://github.com/xcanwin/KeepChatGPT/blob/main/assets/index_mobile.png
[...] If this issue persists please contact us through our help center at help.openai.com. [...]
Open Firefox > bottom-right "..." > Add-ons > Add-ons Manager > the "+" next to Tampermonkey; visit https://chat.openai.com/chat; #使用方法-ios系统 [...]
Purple bubble: GPT-4 model
Purple bubble + m: GPT-4 Mobile model
Purple bubble + w: GPT-4 Web Browsing model
Purple bubble + p: GPT-4 Plugins model
Purple bubble + d: GPT-4 Code Interpreter model
https://github.com/jianchang512/pyvideotrans/blob/main/README_EN.md / https://github.com/jianchang512/pyvideotrans/blob/main/about.md / QQ group: 905857759 / WeChat official account: search for "pyvideotrans"
Speech recognition supports the faster-whisper and openai-whisper models, GoogleSpeech, and the zh_recogn Alibaba Chinese speech-recognition model. [...]
Text-to-speech supports Microsoft Edge TTS, Google TTS, Azure AI TTS, OpenAI TTS, Elevenlabs TTS, a custom TTS server API, and GPT-SoVITS https://github.com/jianchang512/clone-voice
Background music and other accompaniment can be preserved (based on uvr5). [...]
Open a terminal window and run the following commands:
brew install libsndfile
brew install ffmpeg
brew install git
brew install python@3.12
[...]
In the terminal, run git clone https://github.com/jianchang512/pyvideotrans
Run cd pyvideotrans
Then run python -m venv venv
Then run source ./venv/bin/activate and confirm that the terminal prompt now starts with (venv); all of the following commands must be run with a prompt that starts with (venv).
Run pip install -r requirements.txt --no-deps. If it fails, run the following two commands to switch pip to the Aliyun mirror, then retry:
pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
pip config set install.trusted-host mirrors.aliyun.com
If it still fails after switching to the Aliyun mirror, try pip install -r requirements.txt --ignore-installed --no-deps
Run python sp.py to open the software interface.
https://pyvideotrans.com/mac.html
FreeAskInternet is a completely free, private, locally running search aggregator and answer generator that uses an LLM, with no GPU needed. The user asks a question, and the system searches multiple engines, combines the search results, feeds them to a ChatGPT-3.5 LLM, and generates an answer based on those results.
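To illustrate that flow (this is not FreeAskInternet's actual code), here is a minimal sketch of the search-then-answer pattern; the search endpoint, the searchWeb helper, and the local OpenAI-compatible endpoint are assumed placeholders:

```ts
// Sketch of the general "search, aggregate, then answer with an LLM" pattern.
// searchWeb() and the local endpoints are placeholders, not FreeAskInternet's internals.
interface SearchResult {
  title: string
  snippet: string
  url: string
}

// Hypothetical helper: query one search endpoint and return a few parsed results.
async function searchWeb(engine: string, query: string): Promise<SearchResult[]> {
  const res = await fetch(`${engine}?q=${encodeURIComponent(query)}&format=json`)
  const data = await res.json()
  return (data.results ?? []).slice(0, 5)
}

async function askWithSearch(question: string): Promise<string> {
  // 1. Multi-engine search (e.g. a locally running meta-search service).
  const engines = ['http://localhost:8080/search']
  const results = (await Promise.all(engines.map((e) => searchWeb(e, question)))).flat()

  // 2. Combine results into a context block for the model.
  const context = results
    .map((r, i) => `[${i + 1}] ${r.title}\n${r.snippet}\n${r.url}`)
    .join('\n\n')

  // 3. Ask a local OpenAI-compatible LLM to answer using only the search context.
  const res = await fetch('http://localhost:8000/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [
        { role: 'system', content: 'Answer the question using only the search results provided.' },
        { role: 'user', content: `Search results:\n${context}\n\nQuestion: ${question}` },
      ],
    }),
  })
  const data = await res.json()
  return data.choices[0].message.content
}
```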
Resources
A demo site for Microsoft New Bing, lightly customized with Vue3 and Go, with a consistent UI experience. It supports ChatGPT prompts, is reachable from mainland China, is largely compatible with all Microsoft Bing AI features, and can be used for chat without logging in.
You can use ModHeader to add an X-Forwarded-For request header; the corresponding URL is wss://sydney.bing.com/sydney/ChatHub. See https://github.com/adams549659584/go-proxy-bingai/issues/71 and https://zhuanlan.zhihu.com/p/606655303 for details.
⭐ For advanced features such as image generation (select the more creative mode, or Settings => Image Creator in the top right), log in with a Microsoft account and set the user Cookie.
⭐ If you run into any problem, first click https://github.com/adams549659584/go-proxy-bingai/blob/master/docs/img/bing-clear.png in the bottom left; if that does not help, refresh (Shift + F5, Ctrl + Shift + R, or the one-click reset in the settings at the top right); as a last resort, clear the browser cache and cookies (for example, for the 24-hour limit, the not-logged-in prompt, and so on).
#go-proxy-bing [...]
Deployment
⭐ An https domain is required (configure nginx etc. yourself). (Both the front end and back end have this restriction: the browser only includes br in Accept-Encoding over HTTPS, except for localhost.)
Supports Linux (amd64 / arm64) and Windows (amd64 / arm64).
For machines in mainland China, a socks environment variable can be configured.
https://github.com/j178/chatgpt/releases
A CLI for ChatGPT, powered by the GPT-3.5-turbo and GPT-4 models. Get or create your OpenAI API key here: https://platform.openai.com/account/api-keys
💬 Start in chat mode [...]
💻 Use it in a pipeline
cat config.yaml | chatgpt -p 'convert this yaml to json'
echo "Hello, world" | chatgpt -p translator | say
[...]
You can add more prompts in the config file, for example:
{
  "api_key": "sk-xxxxxx",
  "endpoint": "https://api.openai.com/v1",
  "prompts": {
    "default": "You are ChatGPT, a large language model trained by OpenAI. [...] "
  },
  "conversation": {
    "prompt": "default",
    "context_length": 6,
    "model": "gpt-3.5-turbo",
    "stream": true,
    "max_tokens": 1024
  }
}
Then use the -p flag to switch prompts.
Note: the prompt can be a predefined prompt from the config, or one made up on the fly.
Image: https://i1.wp.com/a16z.com/wp-content/uploads/2023/05/stacks_of_books_and_GPU.png?ssl=1 (source: Midjourney)
Research in artificial intelligence is increasing at an exponential rate. [...]
https://jalammar.github.io/illustrated-stable-diffusion/: Introduction to latent diffusion models, the most common type of generative AI model for images. [...]
New models
https://arxiv.org/abs/1706.03762 (2017): The original transformer work and research paper from Google Brain that started it all. [...]
Code generation
https://arxiv.org/abs/2107.03374 (2021): This is OpenAI’s research paper for Codex, the code-generation model behind the GitHub Copilot product.
The more words you use, the better`
    } else {
        const sourceLangCode = query.detectFrom
        const targetLangCode = query.detectTo
        const sourceLangName = lang.getLangName(sourceLangCode)
        const targetLangName = lang.getLangName(targetLangCode)
        console.debug('sourceLang', sourceLangName)
        console.debug('targetLang', targetLangName)
        const toChinese = chineseLangCodes.indexOf(targetLangCode) >= 0
        const targetLangConfig = getLangConfig(targetLangCode)
        const sourceLangConfig = getLangConfig(sourceLangCode)
        console.log('Source language is', sourceLangConfig)
        rolePrompt = targetLangConfig.rolePrompt
        switch (query.action.mode) {
            case null:
            case undefined:
                if (
                    (query.action.rolePrompt ?? '').includes('${text}') ||
                    (query.action.commandPrompt ?? '').includes('${text}')
                ) {
                    contentPrompt = ''
                } else {
                    contentPrompt = '"""' + query.text + '"""'
                }
                rolePrompt = (query.action.rolePrompt ?? '')
                    .replace('${sourceLang}', sourceLangName)
                    .replace('${targetLang}', targetLangName)
                    .replace('${text}', query.text)
                commandPrompt = (query.action.commandPrompt ?? '')
                    .replace('${sourceLang}', sourceLangName)
                    .replace('${targetLang}', targetLangName)
                    .replace('${text}', query.text)
                if (query.action.outputRenderingFormat) {
                    commandPrompt += `. Format: ${query.action.outputRenderingFormat}`
                }
                break
            case 'translate':
                quoteProcessor = new QuoteProcessor()
                commandPrompt = targetLangConfig.genCommandPrompt(
                    sourceLangConfig,
                    quoteProcessor.quoteStart,
                    quoteProcessor.quoteEnd
                )
                contentPrompt = `${quoteProcessor.quoteStart}${query.text}${quoteProcessor.quoteEnd}`
                if (query.text.length [...]
                Only polish the text between ${quoteProcessor.quoteStart} and ${quoteProcessor.quoteEnd}.`
                contentPrompt = `${quoteProcessor.quoteStart}${query.text}${quoteProcessor.quoteEnd}`
                break
            case 'summarize':
                rolePrompt = "You are a professional text summarizer, you can only summarize the text, don't interpret it."
                [...]

    (status) },
    onMessage: (msg) => {
        let resp
        try {
            resp = JSON.parse(msg)
            // eslint-disable-next-line no-empty
        } catch {
            query.onFinish('stop')
            return
        }
        if (!conversationId) {
            conversationId = resp.conversation_id
        }
        const { finish_details: finishDetails } = resp.message
        if (finishDetails) {
            query.onFinish(finishDetails.type)
            return
        }
        const { content, author } = resp.message
        if (author.role === 'assistant') {
            const targetTxt = content.parts.join('')
            let textDelta = targetTxt.slice(length)
            if (quoteProcessor) {
                textDelta = quoteProcessor.processText(textDelta)
            }
            query.onMessage({ content: textDelta, role: '', isWordMode })
            length = targetTxt.length
        }
    },
    onError: (err) => {
        if (err instanceof Error) {
            query.onError(err.message)
            return
        }
        if (typeof err === 'string') {
            query.onError(err)
            return
        }
        if (typeof err === 'object') {
            const { detail } = err
            if (detail) {
                const { message } = detail
                if (message) {
                    query.onError(`ChatGPT Web: ${message}`)
                    return
                }
            }
            query.onError(`ChatGPT Web: ${JSON.stringify(err)}`)
            return
        }
        const { error } = err
        if (error instanceof Error) {
            query.onError(error.message)
            return
        }
        if (typeof error === 'object') {
            const { message } = error
            if (message) {
                query.onError(message)
                return
            }
        }
        query.onError('Unknown error')
    },
})
if (conversationId) {
    await fetcher(`${utils.defaultChatGPTWebAPI}/conversation/${conversationId}`, {
        method: 'PATCH',
        headers,
        body: JSON.stringify({ is_visible: false }),
    })
}
} else {
    const url = urlJoin(settings.apiURL, settings.apiURLPath)
    await fetchSSE(url, {
        method: 'POST',
        headers,
        body: JSON.stringify(body),
        signal: query.signal,
        onMessage: (msg) => {
            let resp
            try {
                resp = JSON.parse(msg)
                // eslint-disable-next-line no-empty
            } catch {
                query.onFinish('stop')
                return
            }
            const { choices } = resp
            if (!choices || choices.length === 0) {
                return { error: 'No result' }
            }
            const { finish_reason: finishReason } = choices[0]
            if (finishReason) {
                query.onFinish(finishReason)
                return
            }
            let targetTxt = ''
            if (!isChatAPI) {
                // It's used for Azure OpenAI Service's legacy parameters.
                targetTxt = choices[0].text
                if (quoteProcessor) {
                    targetTxt = quoteProcessor.processText(targetTxt)
                }
                query.onMessage({ content: targetTxt, role: '', isWordMode })
            } else {
                const { content = '', role } = choices[0].delta
                targetTxt = content
                if (quoteProcessor) {
                    targetTxt = quoteProcessor.processText(targetTxt)
                }
                query.onMessage({ content: targetTxt, role, isWordMode })
            }
        },
        onError: (err) => {
            if (err instanceof Error) {
                query.onError(err.message)
                return
            }
            if (typeof err === 'string') {
                query.onError(err)
                return
            }
            if (typeof err === 'object') {
                const { detail } = err
                if (detail) {
                    query.onError(detail)
                    return
                }
            }
            const { error } = err
            if (error instanceof Error) {
                query.onError(error.message)
                return
            }
            if (typeof error === 'object') {
                const { message } = error
                if (message) {
                    query.onError(message)
                    return
                }
            }
            query.onError('Unknown error')
        },
    })
}
}
A Cloudflare Worker script that proxies OpenAI API requests to Azure OpenAI Service.
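As a rough illustration of the idea (not the project's actual script), a Worker can rewrite the OpenAI-style path and move the bearer token into Azure's api-key header; the resource name, deployment name, and api-version below are placeholders:

```ts
// Minimal sketch: forward OpenAI-format requests to an Azure OpenAI deployment.
// AZURE_RESOURCE, DEPLOYMENT, and API_VERSION are placeholders, not values from the project.
const AZURE_RESOURCE = 'your-resource-name'
const DEPLOYMENT = 'your-deployment-name'
const API_VERSION = '2024-02-01'

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url)

    // Map the OpenAI-style path to the Azure OpenAI path, e.g.
    // /v1/chat/completions -> /openai/deployments/<deployment>/chat/completions
    const path = url.pathname.replace(/^\/v1/, '')
    const target =
      `https://${AZURE_RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}` +
      `${path}?api-version=${API_VERSION}`

    // Azure expects the key in an `api-key` header instead of `Authorization: Bearer`.
    const headers = new Headers(request.headers)
    const auth = headers.get('Authorization') ?? ''
    headers.delete('Authorization')
    headers.set('api-key', auth.replace(/^Bearer\s+/i, ''))

    return fetch(target, {
      method: request.method,
      headers,
      body: request.body,
    })
  },
}
```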
Everyone loves ChatGPT-style tools for quickly reading PDFs, and now a more specialized competitor has arrived.
The selling point is the same as PandaGPT's: read quickly with AI while cross-referencing the original text; the official site claims it makes reading research papers up to 100x faster.
Officially, it works by searching with vector embeddings and using GPT for natural-language interaction (a rough sketch of that pattern follows below).
The design looks very professional 👍
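To make that description concrete, here is a minimal sketch of the general "embed, search, then ask GPT" pattern; it is not the product's actual implementation, and the chunking, in-memory index, and model names are illustrative assumptions:

```ts
// Sketch of the "vector embeddings search + GPT" pattern: embed document chunks,
// retrieve the most similar ones for a question, and let the model answer from them.
const OPENAI_KEY = process.env.OPENAI_API_KEY ?? ''

async function openaiPost(path: string, body: unknown): Promise<any> {
  const res = await fetch(`https://api.openai.com/v1${path}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${OPENAI_KEY}` },
    body: JSON.stringify(body),
  })
  return res.json()
}

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

async function embed(texts: string[]): Promise<number[][]> {
  const data = await openaiPost('/embeddings', { model: 'text-embedding-3-small', input: texts })
  return data.data.map((d: { embedding: number[] }) => d.embedding)
}

// Index already-extracted PDF text chunks, then answer a question from the top matches.
async function askPdf(chunks: string[], question: string): Promise<string> {
  const chunkVectors = await embed(chunks)
  const [queryVector] = await embed([question])
  const top = chunkVectors
    .map((v, i) => ({ i, score: cosine(v, queryVector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3)
    .map(({ i }) => chunks[i])

  const data = await openaiPost('/chat/completions', {
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: 'Answer using only the provided excerpts from the document.' },
      { role: 'user', content: `Excerpts:\n${top.join('\n---\n')}\n\nQuestion: ${question}` },
    ],
  })
  return data.choices[0].message.content
}
```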