Tag: device · 8 indexed skills

ios-device-automation

web-infra-dev/midscene-skills · Productivity

Vision-driven iOS automation using natural language commands and screenshot analysis. Operates entirely from screenshots without requiring DOM access or accessibility labels, so it can interact with any visible UI element regardless of technology stack. Requires a configured vision model (Gemini, Qwen, Doubao, or similar) via environment variables for AI-powered screen understanding and action execution. Follows a synchronous workflow: connect device, take screenshot, execute actions via natural language.

apple-on-device-ai

dpearson2699/swift-ios-skills · AI/ML

Deploy on-device AI across Apple platforms using Foundation Models, Core ML, MLX Swift, and llama.cpp. Choose Foundation Models for zero-setup text generation and structured output on iOS 26+; Core ML for custom vision and NLP models; MLX Swift for maximum throughput on Apple Silicon; and llama.cpp for cross-platform GGUF inference. Foundation Models includes session management, @Generable macros for type-safe structured output, tool calling, and streaming with always-enforced guardrails.
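
A minimal sketch of the zero-setup Foundation Models path this entry highlights, assuming iOS 26+ and the FoundationModels framework; the instructions and prompt strings are illustrative, not part of the skill:

```swift
import FoundationModels

// Single-turn, on-device text generation (assumes iOS 26+).
func summarizeReleaseNotes() async throws -> String? {
    // Guardrails are always on; availability can still fail (e.g., Apple
    // Intelligence disabled), so check before creating a session.
    guard case .available = SystemLanguageModel.default.availability else {
        return nil
    }

    // A session manages multi-turn context; instructions steer behavior.
    let session = LanguageModelSession(
        instructions: "You are a concise release-notes assistant."
    )

    // Runs entirely on device; no network or cloud dependency.
    let response = try await session.respond(
        to: "Summarize: dark-mode support and faster cold launch."
    )
    return response.content
}
```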

ios-device-screenshot

0xbigboss/claude-code · Productivity

Take screenshots from physical iOS devices connected via USB using pymobiledevice3.

device-integrity

dpearson2699/swift-ios-skills · Productivity

Verify that requests to your server come from a genuine Apple device running your unmodified app. DeviceCheck provides per-device bits for simple flags (e.g., "claimed promo offer"). App Attest uses Secure Enclave keys and Apple attestation to cryptographically prove app legitimacy on each request.
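
A client-side sketch of the DeviceCheck half of this entry, using the standard DeviceCheck framework; the server-side exchange with Apple's API, where the two per-device bits are actually read and written, is not shown:

```swift
import DeviceCheck

// Generate an opaque DeviceCheck token to attach to a server request.
// The server forwards it to Apple's DeviceCheck API, which maps it to
// two per-device bits (e.g., "promo already claimed").
func deviceCheckToken() async throws -> Data? {
    guard DCDevice.current.isSupported else {
        return nil  // not supported on simulators
    }
    return try await DCDevice.current.generateToken()
}
```

App Attest follows the same request shape but uses DCAppAttestService to generate a Secure Enclave key and attach a per-request assertion instead of a token.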

harmonyos-device-automation

web-infra-dev/midscene-skills · Productivity

Vision-driven HarmonyOS automation using natural language commands and screenshot analysis, following the same screenshot-based workflow as this repository's iOS and Android skills.

android-device-automation

web-infra-dev/midscene-skills · Productivity

Vision-driven Android automation from screenshots, no DOM access required. Operates entirely from device screenshots using AI visual understanding and interacts with any visible UI element regardless of the underlying technology stack. Supports taps, swipes, text input, app launches, and complex multi-step interactions via natural language commands. Requires a pre-configured vision model (Gemini, Qwen, Doubao, or similar) with API credentials in environment variables. Commands run synchronously.

foundation-models-on-device

affaan-m/everything-claude-code · Productivity

On-device LLM integration for iOS 26+ using Apple's FoundationModels framework, with privacy-first text generation and structured output. Covers text generation, structured output via the @Generable macro, custom tool calling, and snapshot streaming, all running locally without cloud dependency. Requires availability checks before session creation; supports single-turn and multi-turn conversations with optional system instructions. Supports guided generation with @Guide constraints such as numeric ranges.
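
A sketch of the structured-output path described here, assuming the FoundationModels framework; the Review type, its fields, and the prompt are illustrative only:

```swift
import FoundationModels

// @Generable asks the model to fill a typed value; @Guide constrains fields.
@Generable
struct Review {
    @Guide(description: "One-sentence verdict")
    var verdict: String
    @Guide(description: "Score from 1 to 5", .range(1...5))
    var score: Int
}

func generateReview() async throws -> Review? {
    // Availability check before session creation, as the entry notes.
    guard case .available = SystemLanguageModel.default.availability else {
        return nil
    }
    let session = LanguageModelSession()
    // `generating:` returns a typed Review snapshot instead of plain text.
    let response = try await session.respond(
        to: "Review a note-taking app that syncs fast but drains battery.",
        generating: Review.self
    )
    return response.content
}
```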

agent-device

callstackincubator/agent-device · Productivity

Automate iOS and Android app interactions with snapshot-based discovery and selector-driven replay. Supports iOS simulators/devices and Android emulators/devices with session-bound automation, multi-tenant remote daemon mode, and device-scope isolation for QA workflows. Core commands: snapshot for UI discovery with refs; press, fill, and scroll for interactions; open and close for app lifecycle; install and reinstall for binary deployment. Includes utilities for logging and network inspection.