
The POWDER Manual

The Powder Team

Powder is a facility for experimenting with the future of wireless networking in a city-scale "living laboratory" environment.

Powder is run by the University of Utah in partnership with Salt Lake City and the Utah Education and Telehealth Network.

The Powder facility is built on top of Emulab and is run by the Flux Research Group, part of the School of Computing at the University of Utah.

    1 Powder Overview

      1.1 What is Powder

      1.2 Powder status

      1.3 Roadmap to using Powder

    2 Getting Started

      2.1 Next Steps

    3 Powder Over-the-air operation

      3.1 Overview

      3.2 Spectrum use

        3.2.1 BRS Channel Use Information

      3.3 Step-by-step walkthrough

      3.4 Next Steps

    4 Powder starter profiles

    5 Powder Users

      5.1 Register for an Account

        5.1.1 Join an existing project

        5.1.2 Create a new project

        5.1.3 Setting up SSH access

        5.1.4 Setting up X11

    6 Powder and Repeatable Research

    7 Basic Concepts

      7.1 Profiles

        7.1.1 On-demand Profiles

        7.1.2 Persistent Profiles

      7.2 Experiments

        7.2.1 Extending Experiments

      7.3 Projects

      7.4 Physical Machines

      7.5 Virtual Machines

    8 Creating Profiles

      8.1 Creating a profile from an existing one

        8.1.1 Preparation and precautions

        8.1.2 Cloning a Profile

        8.1.3 Copying a Profile

        8.1.4 Creating the Profile

        8.1.5 Updating a profile

      8.2 Repository-Based Profiles

        8.2.1 Updating Repository-Based Profiles

        8.2.2 Branches and Tags in Repository-Based Profiles

      8.3 Creating a profile from scratch

      8.4 Sharing Profiles

      8.5 Versioned Profiles

    9 Resource Reservations

      9.1 What Reservations Guarantee

      9.2 How Reservations May Affect You

      9.3 Making a Reservation

      9.4 Using a Reservation

      9.5 Who Shares Access to Reservations

    10 Describing a profile with Python and geni-lib

      10.1 A single Xen VM node

      10.2 A single physical host

      10.3 Two Xen VM nodes with a link between them

      10.4 Two ARM64 servers in a LAN

      10.5 A VM with a custom size

      10.6 Set a specific IP address on each node

      10.7 RF communication

      10.8 Specify an operating system and set install and execute scripts

      10.9 Profiles with user-specified parameters

      10.10 Add storage to a node

      10.11 Debugging geni-lib profile scripts

    11 Storage Mechanisms

      11.1 Overview of Storage Mechanisms

      11.2 Node-Local Storage

        11.2.1 Specifying Storage in a Profile – Local Datasets

        11.2.2 Allocating Storage in a Running Experiment

        11.2.3 Persisting Local Data

      11.3 Image-backed Datasets

      11.4 Remote Datasets

      11.5 NFS Shared Filesystems

      11.6 Write-back Storage for Fixed and Mobile Endpoints

      11.7 High-performance Storage for Streaming Data

        11.7.1 Using CephFS in an Experiment

        11.7.2 Limitations and Guidelines for CephFS Usage

      11.8 Storage Type Summary (TL;DR)

      11.9 Example Storage Profiles

        11.9.1 Creating a Node-local Dataset

        11.9.2 Creating an Image-backed Dataset from a Node-local Dataset

        11.9.3 Using and Updating an Image-backed Dataset

        11.9.4 Creating a Remote Dataset

        11.9.5 Using a Remote Dataset on a Single Node

        11.9.6 Using a Remote Dataset on Multiple Nodes via a Shared Filesystem

        11.9.7 Using a Remote Dataset on Multiple Nodes via Clones

    12 Advanced Topics

      12.1 Disk Images

      12.2 RSpecs

      12.3 Public IP Access

        12.3.1 Dynamic Public IP Addresses

      12.4 Markdown

      12.5 Introspection

        12.5.1 Client ID

        12.5.2 Control MAC

        12.5.3 Manifest

        12.5.4 Private key

        12.5.5 Profile parameters

      12.6 User-controlled switches and layer-1 topologies

      12.7 Portal API

    13 Virtual Machines

      13.1 Xen VMs

        13.1.1 Controlling CPU and Memory

        13.1.2 Controlling Disk Space

        13.1.3 Setting HVM Mode

        13.1.4 Dedicated and Shared VMs

    14 Hardware

      14.1 Base-station resources

      14.2 Dense-deployment Base-station resources

      14.3 Fixed-endpoint resources

      14.4 Skylark IRIS Endpoints

      14.5 Near-edge computing resources

      14.6 Cloud computing resources

    15 Powder Basic srsLTE Tutorial

      15.1 Objectives

      15.2 Prerequisites

      15.3 Logging In

      15.4 Creating a simple srsLTE experiment

      15.5 Exploring Your Experiment

        15.5.1 Experiment Status

        15.5.2 Profile Instructions

        15.5.3 Topology View

        15.5.4 List View

        15.5.5 Manifest View

        15.5.6 Graphs View

        15.5.7 Actions

        15.5.8 Web-based Shell

      15.6 Using the srsLTE tools

      15.7 Digging deeper

      15.8 Terminating the Experiment

      15.9 Taking Next Steps

    16 Powder OAI Tutorial

      16.1 Objectives

      16.2 Prerequisites

      16.3 Logging In

      16.4 Building Your Own OAI Network

      16.5 Exploring Your Experiment

        16.5.1 Experiment Status

        16.5.2 Profile Instructions

        16.5.3 Topology View

        16.5.4 List View

        16.5.5 Manifest View

        16.5.6 Graphs View

        16.5.7 Actions

        16.5.8 Web-based Shell

      16.6 Starting OAI Services

      16.7 Connecting the UE

      16.8 In-Depth OAI Profile Documentation

      16.9 Terminating the Experiment

      16.10 Taking Next Steps

    17 Powder OpenStack Tutorial

      17.1 Objectives

      17.2 Prerequisites

      17.3 Logging In

      17.4 Building Your Own OpenStack Cloud

      17.5 Exploring Your Experiment

        17.5.1 Experiment Status

        17.5.2 Profile Instructions

        17.5.3 Topology View

        17.5.4 List View

        17.5.5 Manifest View

        17.5.6 Graphs View

        17.5.7 Actions

        17.5.8 Web-based Shell

        17.5.9 Serial Console

      17.6 Bringing up Instances in OpenStack

      17.7 Administering OpenStack

        17.7.1 Log Into The Control Nodes

        17.7.2 Reboot the Compute Node

      17.8 Terminating the Experiment

      17.9 Taking Next Steps

    18 Citing Powder

    19 Getting Help