This chapter covers the basic concepts that you’ll need to understand in order to use Powder.
A profile encapsulates everything needed to run an experiment. It consists of two main parts: a description of the resources (hardware, storage, network, etc.) needed to run the experiment, and the software artifacts that run on those resources.
The resource specification is in the RSpec format. The RSpec describes an entire topology: this includes the nodes (hosts) that the software will run on, the storage that they are attached to, and the network that connects them. The nodes may be virtual machines or physical servers. The RSpec can specify the properties of these nodes, such as how much RAM they should have, how many cores, etc., or can directly reference a specific class of hardware available in one of Powder’s clusters. The network topology can include point-to-point links, LANs, etc., and may be built from either Ethernet or InfiniBand.
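To give a feel for the format, the sketch below builds a minimal GENI v3 request RSpec with Python’s standard library: two raw PCs joined by a LAN. This is purely illustrative; in practice, profiles are normally written with tooling that generates the RSpec for you, and the hardware type name (“d740” here) is a placeholder for whatever the target cluster actually offers.

```python
import xml.etree.ElementTree as ET

NS = "http://www.geni.net/resources/rspec/3"
ET.register_namespace("", NS)  # emit the GENI namespace as the default

def q(tag):
    """Qualify a tag name with the GENI RSpec v3 namespace."""
    return f"{{{NS}}}{tag}"

def make_request_rspec(node_ids, hardware_type=None):
    # Build a minimal "request" RSpec: one physical (raw) PC per id,
    # each with one interface, all joined by a single LAN.
    # hardware_type names are cluster-specific; "d740" below is illustrative.
    rspec = ET.Element(q("rspec"), {"type": "request"})
    lan = ET.Element(q("link"), {"client_id": "lan0"})
    for cid in node_ids:
        node = ET.SubElement(rspec, q("node"),
                             {"client_id": cid, "exclusive": "true"})
        ET.SubElement(node, q("sliver_type"), {"name": "raw-pc"})
        if hardware_type:
            ET.SubElement(node, q("hardware_type"), {"name": hardware_type})
        ET.SubElement(node, q("interface"), {"client_id": f"{cid}:if0"})
        ET.SubElement(lan, q("interface_ref"), {"client_id": f"{cid}:if0"})
    rspec.append(lan)
    return ET.tostring(rspec, encoding="unicode")

print(make_request_rspec(["node0", "node1"], hardware_type="d740"))
```

Note that the RSpec is declarative: it says what resources and topology are wanted, and Powder’s mapper decides which concrete machines and switches satisfy the request.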
The primary way that software is associated with a profile is through disk images. A disk image (often just called an “image”) is a block-level snapshot of the contents of a real or virtual disk.
Profiles come from two sources: some are provided by Powder itself; these tend to be standard installations of popular operating systems and software stacks. Profiles may also be provided by Powder’s users, as a way for communities to share research artifacts.
Profiles in Powder may be on-demand profiles, which means that they are designed to be instantiated for a relatively short period of time (hours or days). Each person instantiating the profile gets their own experiment, so everyone using the profile is doing so independently on their own set of resources.
Powder also supports persistent profiles, which are longer-lived (weeks or months) and are set up to be shared by multiple users. A persistent profile can be thought of as a “testbed within a testbed”; for example:

- An instance of a cloud software stack, providing VMs to a large community
- A cluster set up with a specific software stack for a class
- A persistent instance of a database or other resource used by a large research community
- Machines set up for a contest, giving all participants access to the same hardware
- An HPC cluster temporarily brought up for the running of a particular set of jobs
A persistent profile may offer its own user interface, and its users may not necessarily be aware that they are using Powder. For example, a cloud-style profile might directly offer its own API for provisioning virtual machines. Or, an HPC-style persistent profile might run a standard cluster scheduler, which users interact with rather than the Powder website.
See the chapter on repeatability for more information on repeatable experimentation in Powder.
An experiment is an instantiation of a profile. An experiment uses resources, virtual or physical, on one or more of the clusters that Powder has access to. In most cases, the resources used by an experiment are devoted to the individual use of the user who instantiates the experiment. This means that no one else has an account, access to the filesystems, etc. In the case of experiments using solely physical machines, this also means strong performance isolation from all other Powder users. (The exceptions to this rule are persistent profiles, which may offer resources to many users.)
Running experiments on Powder consumes real resources, which are limited. We ask that you be careful not to hold on to experiments when you are not actively using them. If you are holding on to experiments because getting your working environment set up takes time, consider creating a profile.
The contents of local disk on nodes in an experiment are considered ephemeral: when the experiment ends, anything stored on local disk is lost. Be sure to copy off any data you need before your experiment expires.
All experiments have an expiration time. By default, the expiration time is short (a few hours), but users can use the “Extend” button on the experiment page to request an extension. A request for an extension must be accompanied by a short description that explains the reason for requesting an extension, which will be reviewed by Powder staff. You will receive an email a few hours before your experiment expires reminding you to copy your data off or request an extension.
If you need more time to run an experiment, you may use the “Extend” button on the experiment’s page. You will be presented with a dialog that allows you to select how much longer you need the experiment for. Longer time periods require more extensive approval processes. Short extensions are auto-approved, while longer ones require the intervention of Powder staff or, in the case of indefinite extensions, the steering committee.
Users are grouped into projects. A project is, roughly speaking, a group of people working together on a common research or educational goal. This may be people in a particular research lab, a distributed set of collaborators, instructors and students in a class, etc.
A project is headed by a project leader. We require that project leaders be faculty, senior research staff, or others in an authoritative position. This is because we trust the project leader to approve other members into the project, ultimately making them responsible for the conduct of the users they approve. If Powder staff have questions about a project’s activities, its use of resources, etc., these questions will be directed to the project leader. Some project leaders run a lot of experiments themselves, while some choose to approve accounts for others in the project, who run most of the experiments. Either style works just fine in Powder.
Permissions for some operations / objects depend on the project that they belong to. Currently, the only such permission is the ability to make a profile visible only to the owning project. We expect to introduce more project-specific permission features in the future.
Users of Powder may get exclusive, root-level control over physical machines. When allocated this way, no layers of virtualization or indirection get in the way of performance, and users can be sure that no other users have access to the machines at the same time. This is an ideal situation for repeatable research.
Physical machines are re-imaged between users, so you can be sure that your physical machines don’t have any state left around from the previous user. You can find descriptions of the hardware in Powder’s clusters in the hardware chapter.
To support experiments that must scale to large numbers of nodes, Powder provides virtual nodes. A Powder virtual node is a virtual machine or container running on top of a regular operating system. If an experiment’s per-node CPU, memory, and network requirements are modest, virtual nodes allow an experiment to scale to tens or hundreds of times as many nodes as there are available physical machines in Powder. Virtual nodes are also useful for prototyping experiments and debugging code without tying up significant amounts of physical resources.
Powder virtual nodes are based on the Xen hypervisor or Docker containers. With some limitations, virtual nodes can act in any role that a normal Powder node can: edge node, router, traffic generator, etc. You can run startup commands, remotely login over ssh, run software as root, use common networking tools like tcpdump or traceroute, modify routing tables, capture and load custom images, and reboot. You can construct arbitrary topologies of links and LANs mixing virtual and real nodes.
Powder supports the use of native Docker images (which use a different format than other Powder images). You can either use external, publicly available images or Dockerfiles directly, or have Powder automatically create augmented disk images: external Docker images that are automatically repackaged with the Powder software and its dependencies, so that all Powder features can be supported inside the container.
Virtual nodes in Powder are hosted on either dedicated or shared physical machines. In dedicated mode, you may login to the physical machines hosting your VMs; in shared mode, no one else has access to your VMs, but there are other users on the same hardware whose activities may affect the performance of your VMs.
To learn how to allocate and configure virtual nodes, see the advanced topics section.