use
15 indexed skills · max 10 per page
computer-use-agents
sickn33/antigravity-awesome-skills · Productivity
AI agents that perceive screens, reason about actions, and control computers like humans do.

Implements the perception-reasoning-action loop: capture a screenshot, analyze it with a vision-language model, execute mouse/keyboard operations, repeat
Covers Anthropic's Computer Use (Claude 3.5 Sonnet and Opus 4.5), with tool support for screenshots, mouse/keyboard control, bash execution, and file editing
Requires sandboxed environments (Docker containers with virtual desktops) to isolate the agent
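The perception-reasoning-action loop described above can be sketched as follows. This is a minimal illustration, not Anthropic's actual API: `capture_screenshot`, `query_model`, and `execute` are hypothetical stand-ins for the real screenshot, vision-model, and input-control tools.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Action:
    kind: str      # e.g. "click", "type", or "done" when the task is finished
    payload: dict  # action parameters, e.g. coordinates or text to type


def run_agent_loop(
    capture_screenshot: Callable[[], bytes],
    query_model: Callable[[bytes, str], Action],
    execute: Callable[[Action], None],
    task: str,
    max_steps: int = 20,
) -> bool:
    """Perception-reasoning-action loop: screenshot -> model -> action, repeated."""
    for _ in range(max_steps):
        screen = capture_screenshot()       # perceive: grab the current screen
        action = query_model(screen, task)  # reason: ask the VLM for the next step
        if action.kind == "done":
            return True
        execute(action)                     # act: perform mouse/keyboard operation
    return False  # step budget exhausted without completion
```

Each iteration feeds a fresh screenshot to the model, so the agent observes the effects of its own actions; the step budget bounds runaway loops.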
nextjs-use-search-params-suspense
wsimmonds/claude-nextjs-skills · Frontend
The useSearchParams hook requires TWO things:
use-agently
agentlyhq/use-agently · Productivity
use-agently is the CLI for Agently — a marketplace for AI agents. It is designed to be operated by AI agents as a first-class use case.
use-cases-page-generator
kostja94/marketing-skills · Productivity
Guides use-case pages that bridge product features and real-world customer problems. Scenario-first is the primary organization; these are BOFU (bottom-of-funnel) pages for SaaS/B2B. They answer "when would I use it?" and "how does it help me?", which makes them distinct from solutions pages (organized by industry or outcome).
computer-use-agents
davila7/claude-code-templates · Productivity
The fundamental architecture of computer use agents: observe screen, reason about next action, execute action, repeat. This loop integrates vision models with action execution through an iterative pipeline.
figma-use
figma/mcp-server-guide · Productivity
Use the use_figma tool to execute JavaScript in Figma files via the Plugin API. All detailed reference docs live in references/.
react-use
hairyf/skills · Frontend
Essential React Hooks for sensors, UI, animations, side effects, lifecycles, and state management.

30+ sensor hooks track browser APIs and device interfaces, including geolocation, keyboard input, scroll position, network state, and device motion
9 UI hooks manage audio, video, fullscreen, drag-and-drop, speech synthesis, and click-away detection
8 animation hooks provide requestAnimationFrame loops, intervals, timeouts, spring dynamics, and tweening
16 side-effect hooks handle asy
gemini-computer-use
am-will/codex-skills · Productivity
Gemini 2.5 Computer Use browser automation with Playwright-based agent loops and safety confirmations.

Implements a screenshot-to-action cycle: capture the screen, send it to Gemini, parse function calls, execute them in Playwright, and return results until task completion or the turn limit
Supports multiple browser options: bundled Chromium (default), Chrome/Edge channels via COMPUTER_USE_BROWSER_CHANNEL, or custom executables like Brave
Includes a safety confirmation workflow that prompts users before
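The screenshot-to-action cycle with a turn limit and a confirmation gate might look like the sketch below. All names here (`ask_gemini`, `run_in_browser`, the set of "risky" calls) are illustrative assumptions, not the real Gemini SDK or this skill's actual implementation.

```python
from typing import Callable, FrozenSet, Optional, Tuple


def computer_use_cycle(
    take_screenshot: Callable[[], bytes],
    ask_gemini: Callable[[bytes], Optional[Tuple[str, dict]]],
    run_in_browser: Callable[[str, dict], str],
    confirm: Callable[[str], bool],
    risky_calls: FrozenSet[str] = frozenset({"navigate", "submit_form"}),
    max_turns: int = 30,
) -> str:
    """Screenshot-to-action cycle with a safety gate on risky function calls.

    ask_gemini returns a parsed (function_name, args) call, or None once the
    model signals the task is complete.
    """
    for _ in range(max_turns):
        call = ask_gemini(take_screenshot())
        if call is None:
            return "complete"
        name, args = call
        # Safety confirmation: pause and ask the user before risky actions.
        if name in risky_calls and not confirm(name):
            return "aborted by user"
        run_in_browser(name, args)  # execute the parsed call in the browser
    return "turn limit reached"
```

The turn limit and the confirmation gate are the two safeguards the entry highlights: the former bounds cost, the latter keeps irreversible actions behind a human decision.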
google-search-browser-use
grasseed/google-search-browser-use · Backend
Google searches via real browser sessions, extracting live results while reusing logged-in credentials to minimize CAPTCHAs.

Launches Google searches in real-browser mode to leverage existing user sessions and reduce bot-detection blocks
Provides commands to inspect search results, click through to individual pages, and extract content summaries with source citations
Includes a fallback to Jina AI text extraction if browser parsing encounters difficulties
Requires browser-use instal
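The browser-first-with-fallback pattern this entry describes can be sketched as below. The function names and the `ExtractionError` type are hypothetical; the real skill's commands and error handling are not documented here.

```python
from typing import Callable, Tuple


class ExtractionError(Exception):
    """Raised when browser-based parsing of a results page fails."""


def search_with_fallback(
    query: str,
    browser_search: Callable[[str], str],
    jina_extract: Callable[[str], str],
) -> Tuple[str, str]:
    """Try a real-browser search first; fall back to text extraction on failure.

    Returns (source, text) so callers can cite which path produced the result.
    """
    try:
        return ("browser", browser_search(query))
    except ExtractionError:
        # Browser parsing hit a snag (CAPTCHA, layout change, bot detection);
        # degrade gracefully to the plain-text extraction service.
        return ("jina-fallback", jina_extract(query))
```

Tagging each result with its source keeps citations honest when the fallback path, which sees only extracted text, produced the answer.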
os-use
zrong/skills · Productivity
A comprehensive cross-platform toolkit for OS automation, screenshot capture, visual recognition, mouse/keyboard control, and window management. Supports macOS 12+ and Windows 10+.
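A cross-platform toolkit like this has to dispatch each capability to a platform-specific backend. A minimal sketch of that dispatch, with illustrative backend names that are not the toolkit's actual internals:

```python
import platform
from typing import Callable, Dict


def pick_screenshot_backend(
    system: str,
    backends: Dict[str, Callable[[], bytes]],
) -> Callable[[], bytes]:
    """Select the platform-specific implementation of one capability.

    `system` is the value of platform.system(), e.g. "Darwin" on macOS 12+
    or "Windows" on Windows 10+; unsupported platforms fail loudly rather
    than silently returning a broken backend.
    """
    try:
        return backends[system]
    except KeyError:
        raise NotImplementedError(f"unsupported platform: {system}")
```

A caller would wire it up as `pick_screenshot_backend(platform.system(), {"Darwin": mac_shot, "Windows": win_shot})`, repeating the same pattern for mouse/keyboard control and window management.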