Core Features

Supply GPU with GRAFI Agent

  • For Contributors: Share idle GPUs with the Grafi Cloud network and earn rewards.

  • Key Features:

    • Cross-Platform Support: Available for both Windows and Ubuntu operating systems, ensuring compatibility with most GPU setups.

    • Automated Resource Management: Grafi Agent handles GPU resource allocation, task scheduling, and data exchange without requiring manual intervention.

    • Secure Integration: Uses Web3 authentication and smart contracts to ensure secure GPU contribution and revenue sharing.

    • Earnings and Incentives: Contributors receive revenue based on GPU performance, uptime, and availability, with an additional $GRAFI token incentive for DePIN contributors (an illustrative calculation follows this list).

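The whitepaper does not publish the exact reward formula, so the sketch below is only a hypothetical illustration of how contributor revenue might scale with performance, uptime, and availability, plus a flat $GRAFI bonus for DePIN contributors. Every weight, rate, and name here (`ContributorStats`, `estimate_rewards`) is an assumption, not Grafilab's actual accounting.

```python
# Hypothetical illustration only -- the weights, rates, and bonus below are
# assumptions, not figures published by Grafilab.
from dataclasses import dataclass

@dataclass
class ContributorStats:
    performance: float   # normalized benchmark score, 0.0-1.0
    uptime: float        # fraction of the period the agent was online, 0.0-1.0
    availability: float  # fraction of online time the GPU was rentable, 0.0-1.0
    is_depin: bool       # True for an individually contributed (DePIN) GPU

def estimate_rewards(stats: ContributorStats,
                     base_revenue_usd: float,
                     depin_bonus_grafi: float = 100.0) -> dict:
    """Toy model: scale base revenue by a weighted score, add a flat DePIN bonus."""
    score = (0.5 * stats.performance      # assumed weighting
             + 0.3 * stats.uptime
             + 0.2 * stats.availability)
    return {
        "revenue_usd": round(base_revenue_usd * score, 2),
        "grafi_bonus": depin_bonus_grafi if stats.is_depin else 0.0,
    }

print(estimate_rewards(ContributorStats(0.9, 0.98, 0.95, True), base_revenue_usd=120.0))
# {'revenue_usd': 112.08, 'grafi_bonus': 100.0}
```
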
Rent GPU Power with Rent & Run

  • For Renters: Access a wide array of GPU resources tailored for tasks like AI model training, gaming, rendering, and data analytics.

  • Key Features:

    • Flexible GPU Options: Choose from centralized GPUs in data centers (CePIN) or decentralized GPUs contributed by individuals (DePIN).

    • Pre-Configured Environments: Renters can select from Docker templates preloaded with popular AI frameworks like TensorFlow, PyTorch, or OpenCV for faster setup.

    • Custom Configurations: Users can upload and run custom Docker images for specialized workflows (a container-launch sketch appears at the end of this section).

    • Real-Time Monitoring: Track GPU performance, task completion status, and resource utilization through a user-friendly dashboard.

    • Transparent Billing: Costs are calculated based on GPU usage time, ensuring clear and predictable pricing.
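
As a concrete illustration of usage-time billing, the total cost is simply the listed hourly rate multiplied by metered hours. The rate below is a placeholder, not a published Grafi Cloud price.

```python
# Placeholder numbers -- actual rates are set per GPU listing on Grafi Cloud.
hourly_rate_usd = 0.50   # assumed price for the rented GPU
hours_used = 8.0         # metered usage time
print(f"Total cost: ${hourly_rate_usd * hours_used:.2f}")  # Total cost: $4.00
```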

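To make the Pre-Configured Environments and Custom Configurations features above more concrete, here is a minimal local sketch of starting a GPU-enabled PyTorch container with Docker's Python SDK (docker-py) on a machine with the NVIDIA container runtime. The image tag and command are examples only; how Grafi Cloud provisions these templates behind its dashboard is not specified on this page.

```python
# Local illustration of launching a GPU-enabled container with docker-py.
# Grafi Cloud's own provisioning flow runs behind its dashboard and is not
# documented here.
import docker
from docker.types import DeviceRequest

client = docker.from_env()

container = client.containers.run(
    "pytorch/pytorch:latest",            # example pre-configured AI image
    command=["python", "-c",
             "import torch; print(torch.cuda.is_available())"],
    device_requests=[                    # expose the host's GPUs to the container
        DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    detach=True,
)

container.wait()                         # block until the check finishes
print(container.logs().decode())         # "True" on a host with a working NVIDIA runtime
container.remove(force=True)
```

The same pattern extends to a custom workflow: build a custom image, push it to a registry, and reference its tag in place of the PyTorch one.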