https://loosedrawing.com/ © Loose Drawing
[Image: stacks of books and a GPU (Source: Midjourney)]
Research in artificial intelligence is increasing at an exponential rate. [...]
https://jalammar.github.io/illustrated-stable-diffusion/: Introduction to latent diffusion models, the most common type of generative AI model for images. [...]
New models
https://arxiv.org/abs/1706.03762 (2017): The original transformer work and research paper from Google Brain that started it all. [...]
Code generation
https://arxiv.org/abs/2107.03374 (2021): This is OpenAI's research paper for Codex, the code-generation model behind the GitHub Copilot product.
Powered by open source tech
Built at Google to scale YouTube.com to billions of users, Vitess is the world's most scalable open source database. [...]
"Our team wants to focus on helping our customers meet their health and fitness goals, not on managing database servers." [...]
Make your PlanetScale data accessible across your organization by safely extracting and loading it from PlanetScale into other databases, such as BigQuery, Snowflake, or Redshift.
Extract data to other sources
Supports analytical workflows
The more words you use, the better
} else {
    const sourceLangCode = query.detectFrom
    const targetLangCode = query.detectTo
    const sourceLangName = lang.getLangName(sourceLangCode)
    const targetLangName = lang.getLangName(targetLangCode)
    console.debug('sourceLang', sourceLangName)
    console.debug('targetLang', targetLangName)
    const toChinese = chineseLangCodes.indexOf(targetLangCode) >= 0
    const targetLangConfig = getLangConfig(targetLangCode)
    const sourceLangConfig = getLangConfig(sourceLangCode)
    console.log('Source language is', sourceLangConfig)
    rolePrompt = targetLangConfig.rolePrompt
    switch (query.action.mode) {
        case null:
        case undefined:
            if (
                (query.action.rolePrompt ?? '').includes('${text}') ||
                (query.action.commandPrompt ?? '').includes('${text}')
            ) {
                contentPrompt = ''
            } else {
                contentPrompt = '"""' + query.text + '"""'
            }
            rolePrompt = (query.action.rolePrompt ?? '')
                .replace('${sourceLang}', sourceLangName)
                .replace('${targetLang}', targetLangName)
                .replace('${text}', query.text)
            commandPrompt = (query.action.commandPrompt ?? '')
                .replace('${sourceLang}', sourceLangName)
                .replace('${targetLang}', targetLangName)
                .replace('${text}', query.text)
            if (query.action.outputRenderingFormat) {
                commandPrompt += `. Format: ${query.action.outputRenderingFormat}`
            }
            break
        case 'translate':
            quoteProcessor = new QuoteProcessor()
            commandPrompt = targetLangConfig.genCommandPrompt(
                sourceLangConfig,
                quoteProcessor.quoteStart,
                quoteProcessor.quoteEnd
            )
            contentPrompt = `${quoteProcessor.quoteStart}${query.text}${quoteProcessor.quoteEnd}`
            if (query.text.length [...]
            Only polish the text between ${quoteProcessor.quoteStart} and ${quoteProcessor.quoteEnd}.
            contentPrompt = `${quoteProcessor.quoteStart}${query.text}${quoteProcessor.quoteEnd}`
            break
        case 'summarize':
            rolePrompt = "You are a professional text summarizer, you can only summarize the text, don't interpret it."
            [...]
(status)
},
onMessage: (msg) => {
    let resp
    try {
        resp = JSON.parse(msg)
        // eslint-disable-next-line no-empty
    } catch {
        query.onFinish('stop')
        return
    }
    if (!conversationId) {
        conversationId = resp.conversation_id
    }
    const { finish_details: finishDetails } = resp.message
    if (finishDetails) {
        query.onFinish(finishDetails.type)
        return
    }
    const { content, author } = resp.message
    if (author.role === 'assistant') {
        const targetTxt = content.parts.join('')
        let textDelta = targetTxt.slice(length)
        if (quoteProcessor) {
            textDelta = quoteProcessor.processText(textDelta)
        }
        query.onMessage({ content: textDelta, role: '', isWordMode })
        length = targetTxt.length
    }
},
onError: (err) => {
    if (err instanceof Error) {
        query.onError(err.message)
        return
    }
    if (typeof err === 'string') {
        query.onError(err)
        return
    }
    if (typeof err === 'object') {
        const { detail } = err
        if (detail) {
            const { message } = detail
            if (message) {
                query.onError(`ChatGPT Web: ${message}`)
                return
            }
        }
        query.onError(`ChatGPT Web: ${JSON.stringify(err)}`)
        return
    }
    const { error } = err
    if (error instanceof Error) {
        query.onError(error.message)
        return
    }
    if (typeof error === 'object') {
        const { message } = error
        if (message) {
            query.onError(message)
            return
        }
    }
    query.onError('Unknown error')
},
})
if (conversationId) {
    await fetcher(`${utils.defaultChatGPTWebAPI}/conversation/${conversationId}`, {
        method: 'PATCH',
        headers,
        body: JSON.stringify({ is_visible: false }),
    })
}
} else {
    const url = urlJoin(settings.apiURL, settings.apiURLPath)
    await fetchSSE(url, {
        method: 'POST',
        headers,
        body: JSON.stringify(body),
        signal: query.signal,
        onMessage: (msg) => {
            let resp
            try {
                resp = JSON.parse(msg)
                // eslint-disable-next-line no-empty
            } catch {
                query.onFinish('stop')
                return
            }
            const { choices } = resp
            if (!choices || choices.length === 0) {
                return { error: 'No result' }
            }
            const { finish_reason: finishReason } = choices[0]
            if (finishReason) {
                query.onFinish(finishReason)
                return
            }
            let targetTxt = ''
            if (!isChatAPI) {
                // It's used for Azure OpenAI Service's legacy parameters.
                targetTxt = choices[0].text
                if (quoteProcessor) {
                    targetTxt = quoteProcessor.processText(targetTxt)
                }
                query.onMessage({ content: targetTxt, role: '', isWordMode })
            } else {
                const { content = '', role } = choices[0].delta
                targetTxt = content
                if (quoteProcessor) {
                    targetTxt = quoteProcessor.processText(targetTxt)
                }
                query.onMessage({ content: targetTxt, role, isWordMode })
            }
        },
        onError: (err) => {
            if (err instanceof Error) {
                query.onError(err.message)
                return
            }
            if (typeof err === 'string') {
                query.onError(err)
                return
            }
            if (typeof err === 'object') {
                const { detail } = err
                if (detail) {
                    query.onError(detail)
                    return
                }
            }
            const { error } = err
            if (error instanceof Error) {
                query.onError(error.message)
                return
            }
            if (typeof error === 'object') {
                const { message } = error
                if (message) {
                    query.onError(message)
                    return
                }
            }
            query.onError('Unknown error')
        },
    })
}
}
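The QuoteProcessor used in both snippets is defined elsewhere in the repository; judging from its use here, it wraps the source text in delimiter tokens so the model can be told exactly which span to translate or polish, then strips those tokens out of the streamed reply. A minimal sketch of that idea, with hypothetical delimiter strings and buffering logic (the real class may differ):

class QuoteProcessor {
    // Assumed delimiters for illustration; the actual tokens are chosen by the project.
    readonly quoteStart = '<<<'
    readonly quoteEnd = '>>>'
    private pending = ''

    // Called with each streamed text delta; returns the delta with any
    // delimiter tokens removed. A partial token at the end of a chunk is
    // held back until the next call, since tokens can span chunk boundaries.
    processText(delta: string): string {
        const out = (this.pending + delta)
            .split(this.quoteStart).join('')
            .split(this.quoteEnd).join('')
        this.pending = ''
        for (const token of [this.quoteStart, this.quoteEnd]) {
            for (let i = token.length - 1; i > 0; i--) {
                if (out.endsWith(token.slice(0, i))) {
                    this.pending = token.slice(0, i)
                    return out.slice(0, out.length - i)
                }
            }
        }
        return out
    }
}

Generating unique delimiters per request, rather than hard-coding them as above, would avoid collisions with user text that happens to contain the same characters.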
Master, help us awaken and become enlightened. Through a prompt, invite a master to your side to accompany you as you think and grow. [...]
Of course, I (GPT) can explain the commands in English:
/help: Lists all the commands, descriptions, and rules I recognize. [...]
/role: Lists all available master roles. [...] If you'd like a round-table discussion involving multiple roles, you can list multiple roles after the command.
This is hands down the smoothest productivity tool I have used recently. In one sentence: an operation-guide generator. A must-have for anyone building interactive tutorials!
Walk through the actual steps once on a PC page and it automatically records the entire workflow, capturing screenshots and generating annotated documentation along the way.
For now it can only generate guides in English, but those are easy to translate. Compared with taking screenshots and writing documentation by hand, it is a huge time-saver.
Tango instantly turns what you know into step-by-step guidance—no videos, meetings, or screen shares required.
Mem understands what you tell it and saves it, to help you connect, organize, and remember the most important things in your life. [...] Not when you have Mem to talk to. [...]
Save notes, links, tweets, and more with ease, so that even when you forget something, Mem won't. Keep track of your day and create notes directly linked to your calendar with one click. [...]
Use our AI-powered Smart Search or ask Mem a question to bring whatever you're looking for back to top of mind.
aliyun-gpushare scheduler
A Cloudflare Worker script that proxies OpenAI API requests to the Azure OpenAI Service.
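The core of such a proxy is rewriting the request URL and the auth header, since Azure serves the same API under /openai/deployments/{deployment}/... and authenticates with an api-key header rather than a Bearer token. A minimal sketch of that mapping, using placeholder resource, deployment, and API-version values rather than anything from the linked script:

// Cloudflare Worker (module syntax). The three constants are assumptions
// for illustration; a real deployment would use its own values.
const AZURE_RESOURCE = 'my-resource'
const DEPLOYMENT = 'my-gpt-deployment'
const API_VERSION = '2023-05-15'

export default {
    async fetch(request: Request): Promise<Response> {
        const url = new URL(request.url)
        // Map an OpenAI path such as /v1/chat/completions onto Azure's layout.
        const path = url.pathname.replace(/^\/v1\//, '')
        const target =
            `https://${AZURE_RESOURCE}.openai.azure.com/openai/deployments/` +
            `${DEPLOYMENT}/${path}?api-version=${API_VERSION}`
        // Azure expects the key in an api-key header instead of Authorization: Bearer.
        const headers = new Headers(request.headers)
        const bearer = headers.get('Authorization') ?? ''
        headers.set('api-key', bearer.replace(/^Bearer\s+/i, ''))
        headers.delete('Authorization')
        return fetch(target, { method: request.method, headers, body: request.body })
    },
}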
Claims free games periodically on the Epic Games Store, Prime Gaming, and GOG. This will run node epic-games; node prime-gaming; node gog. If you only want to claim games for one of the stores, you can override the default command by appending e.g. [...] Data (including JSON files with claimed games, codes to redeem, and screenshots) is stored in the Docker volume fgc. [...] When running for the first time, you have to log in for each store you want to claim games on. [...] Claiming Amazon Games works out of the box; however, for games on external stores you need to either link your account or redeem a key.
Unified Model Serving Framework
🏹 Scalable with powerful performance optimizations
The Runner abstraction scales model inference separately from your custom code and maximizes multi-core CPU utilization with automatic provisioning [...]
We strip out as much potentially sensitive information as possible, and we will never collect user code, model data, model names, or stack traces.