A simple tool to convert screenshots, mockups, and Figma designs into clean, functional code using AI. Now supporting Claude 3.5 Sonnet and GPT-4o! Supported AI models: Claude 3.5 Sonnet (best model). [...] We also just added experimental support for taking a video/screen recording of a website in action and turning that into a functional prototype. [...] Run the frontend: cd frontend && yarn && yarn dev, then open http://localhost:5173 to use the app.
Experience real-time, accurate AI conversations with GPT-4o and Claude 3.5 Sonnet for free. Chat100.ai: use OpenAI GPT-4o and Claude 3.5 Sonnet for free. GPT-4o and Claude 3.5 Sonnet are advanced AI models designed to deliver fast, accurate, and intelligent replies. On Chat100.ai you can use GPT-4o and Claude 3.5 Sonnet for free, with no login required. Chat100.ai offers a smooth AI chat experience and is an ideal free ChatGPT alternative for users looking for advanced AI support.
Today we discuss large models, whose core is the scaling law. What let Ilya Sutskever (OpenAI co-founder and chief scientist) see further than others was the scaling law. In fact, between America's first and second stages there was an intermediate stage: B2B software companies. All the SaaS companies are working hard to use large models to improve their software capabilities, so in B2B SaaS, in productivity, this is a market where the US leads China, and a great deal has already been invested there.
Installation with pip install: directly install from https://pypi.org/project/promplate/: pip install promplate[openai] We use OpenAI here just for demonstration. All the code below should run as is, meaning you can copy and paste it into your terminal and it will work. >>> from promplate.llm.openai import ChatComplete # this simply wraps OpenAI's SDK >>> complete = ChatComplete(api_key="...") [...] >>> import time >>> from promplate import Template >>> greet = Template("Greet me. [...] You can initialize a Node with a string, just like initializing a Template: >>> from promplate import Node >>> greet = Node("Greet me.
AI Trading Prototype
This project uses OpenAI to determine the sentiment of cryptocurrency news headlines and subsequently executes orders on the Binance Spot Market.
It is composed of two main components: Sentiment Generator and Trading Bot.
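The two components above can be sketched end to end. This is a minimal, hypothetical sketch, not the project's actual code: the real Sentiment Generator calls OpenAI on each headline (stubbed here with keyword matching), and the real Trading Bot places orders on the Binance Spot Market (here reduced to choosing an order side). All names, keywords, and thresholds are illustrative.

```typescript
// Stand-in for the Sentiment Generator. In the real project this would ask
// OpenAI to score the headline; here it is stubbed with keyword matching.
type Sentiment = { score: number }; // -1 (bearish) .. +1 (bullish)

function scoreHeadline(headline: string): Sentiment {
  const text = headline.toLowerCase();
  const bullish = ["surge", "rally", "adoption"].some((w) => text.includes(w));
  const bearish = ["hack", "ban", "crash"].some((w) => text.includes(w));
  return { score: (bullish ? 1 : 0) - (bearish ? 1 : 0) };
}

// Stand-in for the Trading Bot's decision step: map a sentiment score to a
// Binance-style order side, or hold when the signal is weak.
function decideOrder(
  s: Sentiment,
  buyThreshold = 0.5,
  sellThreshold = -0.5
): "BUY" | "SELL" | "HOLD" {
  if (s.score >= buyThreshold) return "BUY";
  if (s.score <= sellThreshold) return "SELL";
  return "HOLD";
}
```

In the real pipeline a HOLD decision would simply skip order placement, while BUY/SELL would be submitted to the Binance Spot API with a configured symbol and quantity.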
https://www.warp.dev/ Smart & Reliable: With knowledge sharing tools, autocompletions, and fully integrated AI, Warp is a more intelligent terminal — out of the box. Warp is built with Rust, rendered with Metal, and optimized for performance. [...] “Warp includes so many things I can’t live without — from clicking where you want the cursor to go, to AI autocorrect.” [...] I love how I can navigate the terminal editor just like a code editor with all the Move/Move-Select keymaps.
https://github.com/joonspk-research/generative_agents/blob/main/cover.png This repository accompanies our research paper, "Generative Agents: Interactive Simulacra of Human Behavior" (https://arxiv.org/abs/2304.03442). It contains our core simulation module for generative agents—computational agents that simulate believable human behaviors—and their game environment. [...] To run a new simulation, you will need to start two servers concurrently: the environment server and the agent simulation server. [...] Ensure that the environment server keeps running while the simulation is in progress, so keep its command-line tab open!
https://github.com/j178/chatgpt/releases A CLI for ChatGPT, powered by the GPT-3.5-turbo and GPT-4 models. Get or create your OpenAI API key here: https://platform.openai.com/account/api-keys 💬 Start in chat mode [...] 💻 Use it in a pipeline: cat config.yaml | chatgpt -p 'convert this yaml to json' or echo "Hello, world" | chatgpt -p translator | say [...] You can add more prompts in the config file, for example: {"api_key": "sk-xxxxxx", "endpoint": "https://api.openai.com/v1", "prompts": {"default": "You are ChatGPT, a large language model trained by OpenAI. [...]"}, "conversation": {"prompt": "default", "context_length": 6, "model": "gpt-3.5-turbo", "stream": true, "max_tokens": 1024}} Then use the -p flag to switch prompts. Note: the prompt can be a predefined one from the config, or one written on the fly.
https://i1.wp.com/a16z.com/wp-content/uploads/2023/05/stacks_of_books_and_GPU.png?ssl=1 (Source: Midjourney) Research in artificial intelligence is increasing at an exponential rate. [...] https://jalammar.github.io/illustrated-stable-diffusion/: Introduction to latent diffusion models, the most common type of generative AI model for images. [...] New models https://arxiv.org/abs/1706.03762 (2017): The original transformer work and research paper from Google Brain that started it all. [...] Code generation https://arxiv.org/abs/2107.03374 (2021): This is OpenAI’s research paper for Codex, the code-generation model behind the GitHub Copilot product.
// [...]
...The more words you use, the better`
} else {
    const sourceLangCode = query.detectFrom
    const targetLangCode = query.detectTo
    const sourceLangName = lang.getLangName(sourceLangCode)
    const targetLangName = lang.getLangName(targetLangCode)
    console.debug('sourceLang', sourceLangName)
    console.debug('targetLang', targetLangName)
    const toChinese = chineseLangCodes.indexOf(targetLangCode) >= 0
    const targetLangConfig = getLangConfig(targetLangCode)
    const sourceLangConfig = getLangConfig(sourceLangCode)
    console.log('Source language is', sourceLangConfig)
    rolePrompt = targetLangConfig.rolePrompt
    switch (query.action.mode) {
        case null:
        case undefined:
            if (
                (query.action.rolePrompt ?? '').includes('${text}') ||
                (query.action.commandPrompt ?? '').includes('${text}')
            ) {
                contentPrompt = ''
            } else {
                contentPrompt = '"""' + query.text + '"""'
            }
            rolePrompt = (query.action.rolePrompt ?? '')
                .replace('${sourceLang}', sourceLangName)
                .replace('${targetLang}', targetLangName)
                .replace('${text}', query.text)
            commandPrompt = (query.action.commandPrompt ?? '')
                .replace('${sourceLang}', sourceLangName)
                .replace('${targetLang}', targetLangName)
                .replace('${text}', query.text)
            if (query.action.outputRenderingFormat) {
                commandPrompt += `. Format: ${query.action.outputRenderingFormat}`
            }
            break
        case 'translate':
            quoteProcessor = new QuoteProcessor()
            commandPrompt = targetLangConfig.genCommandPrompt(
                sourceLangConfig,
                quoteProcessor.quoteStart,
                quoteProcessor.quoteEnd
            )
            contentPrompt = `${quoteProcessor.quoteStart}${query.text}${quoteProcessor.quoteEnd}`
            // [...] if (query.text.length ...) — elided; the prompt here includes
            // "Only polish the text between ${quoteProcessor.quoteStart} and ${quoteProcessor.quoteEnd}."
            contentPrompt = `${quoteProcessor.quoteStart}${query.text}${quoteProcessor.quoteEnd}`
            break
        case 'summarize':
            rolePrompt = "You are a professional text summarizer, you can only summarize the text, don't interpret it."
            // [...]
    }
}
// [...] (status) => { ... },  — status-code handler elided
onMessage: (msg) => {
    let resp
    try {
        resp = JSON.parse(msg)
        // eslint-disable-next-line no-empty
    } catch {
        query.onFinish('stop')
        return
    }
    if (!conversationId) {
        conversationId = resp.conversation_id
    }
    const { finish_details: finishDetails } = resp.message
    if (finishDetails) {
        query.onFinish(finishDetails.type)
        return
    }
    const { content, author } = resp.message
    if (author.role === 'assistant') {
        const targetTxt = content.parts.join('')
        let textDelta = targetTxt.slice(length)
        if (quoteProcessor) {
            textDelta = quoteProcessor.processText(textDelta)
        }
        query.onMessage({ content: textDelta, role: '', isWordMode })
        length = targetTxt.length
    }
},
onError: (err) => {
    if (err instanceof Error) {
        query.onError(err.message)
        return
    }
    if (typeof err === 'string') {
        query.onError(err)
        return
    }
    if (typeof err === 'object') {
        const { detail } = err
        if (detail) {
            const { message } = detail
            if (message) {
                query.onError(`ChatGPT Web: ${message}`)
                return
            }
        }
        query.onError(`ChatGPT Web: ${JSON.stringify(err)}`)
        return
    }
    const { error } = err
    if (error instanceof Error) {
        query.onError(error.message)
        return
    }
    if (typeof error === 'object') {
        const { message } = error
        if (message) {
            query.onError(message)
            return
        }
    }
    query.onError('Unknown error')
},
})
if (conversationId) {
    await fetcher(`${utils.defaultChatGPTWebAPI}/conversation/${conversationId}`, {
        method: 'PATCH',
        headers,
        body: JSON.stringify({ is_visible: false }),
    })
}
} else {
    const url = urlJoin(settings.apiURL, settings.apiURLPath)
    await fetchSSE(url, {
        method: 'POST',
        headers,
        body: JSON.stringify(body),
        signal: query.signal,
        onMessage: (msg) => {
            let resp
            try {
                resp = JSON.parse(msg)
                // eslint-disable-next-line no-empty
            } catch {
                query.onFinish('stop')
                return
            }
            const { choices } = resp
            if (!choices || choices.length === 0) {
                return { error: 'No result' }
            }
            const { finish_reason: finishReason } = choices[0]
            if (finishReason) {
                query.onFinish(finishReason)
                return
            }
            let targetTxt = ''
            if (!isChatAPI) {
                // It's used for Azure OpenAI Service's legacy parameters.
                targetTxt = choices[0].text
                if (quoteProcessor) {
                    targetTxt = quoteProcessor.processText(targetTxt)
                }
                query.onMessage({ content: targetTxt, role: '', isWordMode })
            } else {
                const { content = '', role } = choices[0].delta
                targetTxt = content
                if (quoteProcessor) {
                    targetTxt = quoteProcessor.processText(targetTxt)
                }
                query.onMessage({ content: targetTxt, role, isWordMode })
            }
        },
        onError: (err) => {
            if (err instanceof Error) {
                query.onError(err.message)
                return
            }
            if (typeof err === 'string') {
                query.onError(err)
                return
            }
            if (typeof err === 'object') {
                const { detail } = err
                if (detail) {
                    query.onError(detail)
                    return
                }
            }
            const { error } = err
            if (error instanceof Error) {
                query.onError(error.message)
                return
            }
            if (typeof error === 'object') {
                const { message } = error
                if (message) {
                    query.onError(message)
                    return
                }
            }
            query.onError('Unknown error')
        },
    })
}
}
A Cloudflare Worker script to proxy OpenAI requests to Azure OpenAI Service.
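A minimal sketch of what such a proxy can look like, assuming Cloudflare's module Worker syntax; the resource name, deployment name, and API version below are placeholders, and the actual script's path and header handling may differ. The proxy bridges two differences: Azure scopes requests to a named deployment in the URL, and it authenticates with an api-key header instead of Authorization: Bearer.

```typescript
// Placeholders only — not values from the actual worker script.
const AZURE_RESOURCE = "my-resource";
const AZURE_DEPLOYMENT = "my-deployment";
const AZURE_API_VERSION = "2023-05-15";

// Map an OpenAI-style path (e.g. /v1/chat/completions) onto the
// deployment-scoped Azure OpenAI URL shape.
function buildAzureURL(openaiPath: string): string {
  const path = openaiPath.replace(/^\/v1/, "");
  return (
    `https://${AZURE_RESOURCE}.openai.azure.com/openai/deployments/` +
    `${AZURE_DEPLOYMENT}${path}?api-version=${AZURE_API_VERSION}`
  );
}

// Azure expects the raw key in an `api-key` header, not a Bearer token.
function stripBearer(authorization: string): string {
  return authorization.replace(/^Bearer\s+/i, "");
}

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    const headers = new Headers(request.headers);
    headers.set("api-key", stripBearer(headers.get("Authorization") ?? ""));
    headers.delete("Authorization");
    return fetch(buildAzureURL(url.pathname), {
      method: request.method,
      headers,
      body: request.body,
    });
  },
};
```

With this shape, an unmodified OpenAI client pointed at the Worker's URL (with its OpenAI key in the Authorization header) is transparently forwarded to the Azure deployment.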