On this page:
15.1 Conducted attenuator matrix resources
15.2 Paired Radio workbench resources
15.3 Indoor OTA lab resources
15.4 Rooftop Base-station resources
15.5 Dense-deployment Base-station resources
15.6 Fixed-endpoint resources
15.7 Mobile-endpoint resources
15.8 Near-edge computing resources
15.9 Cloud computing resources

15 Hardware and Wireless Environments

As described in the Using Powder section, Powder provides a wide variety of hardware and configured environments for carrying out wireless research, including conducted and over-the-air (OTA) RF resources as well as backend computation resources. These resources are spread across the University of Utah campus as shown in the Powder Map. Conducted RF resources are located largely in the MEB datacenter, OTA resources are distributed across campus, "near edge" compute nodes are in the MEB and Fort datacenters, and additional "cloud" compute nodes are in the Downtown datacenter (not shown on the map).

Following is a brief description of these resources. For further information about the capabilities of, and restrictions on, the individual OTA components, refer to the Radio Information page at the Powder portal.

15.1 Conducted attenuator matrix resources

A custom-built JFW Industries 50PA-960 attenuator matrix provides conducted RF paths, with configurable attenuation, between a variety of connected components. The RF paths themselves are fixed and can only be changed manually by Powder personnel, but users can adjust the attenuation values on those paths dynamically.
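
As a purely illustrative sketch of what a dynamic attenuation adjustment can look like, the Python fragment below pushes a new value to an attenuator controller over a raw TCP socket. The host name, port, and command syntax are placeholders and not the documented JFW 50PA-960 command set or the actual Powder control mechanism.

    # Illustrative only: set the attenuation on one matrix path over TCP.
    # The controller address, port, and command format are placeholders; the
    # real JFW/Powder control interface is documented separately.
    import socket

    ATTEN_HOST = "atten-ctrl.example.net"   # placeholder controller address
    ATTEN_PORT = 3001                       # placeholder TCP port

    def set_attenuation(path_id: int, db: int) -> str:
        """Send one set-attenuation command and return the controller's reply."""
        cmd = f"SET {path_id} {db}\r\n"     # placeholder command syntax
        with socket.create_connection((ATTEN_HOST, ATTEN_PORT), timeout=5) as sock:
            sock.sendall(cmd.encode("ascii"))
            return sock.recv(1024).decode("ascii", errors="replace").strip()

    # Example: raise the attenuation on path 3 to 40 dB.
    print(set_attenuation(3, 40))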

The current configuration of paths is:

[Figure: screenshots/powder/powder-attenuator-config.png]

The connected components include NI N300, X310, and B210 SDRs; the specific units and their paths are shown in the configuration above.

The N300 and X310 SDRs have their 1 PPS and 10 MHz clock inputs connected to an NI Octoclock-G module to provide a synchronized timing base. Currently the Octoclock is not synchronized to GPS. The B210 SDRs are not connected to the Octoclock.
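
For applications that depend on this shared timing base, the external references usually have to be selected explicitly in software. The fragment below is a minimal UHD Python sketch (the device address is a placeholder) that points an N300 or X310 in the matrix at the Octoclock's 10 MHz and 1 PPS outputs; the B210s, which are not cabled to the Octoclock, would be left on their internal references.

    # Minimal sketch: use the Octoclock-G's external 10 MHz / 1 PPS references
    # on an N300/X310 in the attenuator matrix. The address is a placeholder.
    import uhd

    usrp = uhd.usrp.MultiUSRP("addr=192.168.40.2")  # hypothetical device address

    usrp.set_clock_source("external")   # 10 MHz reference from the Octoclock-G
    usrp.set_time_source("external")    # 1 PPS from the Octoclock-G

    # Reset the device time on the next PPS edge so multiple radios align.
    usrp.set_time_next_pps(uhd.types.TimeSpec(0.0))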

15.2 Paired Radio workbench resources

The Paired Radio workbench consists of two sets of two NI X310 SDRs each, cross-connected via SMA cables with a fixed 30 dB attenuation on each path. Connections are shown in this diagram:

[Figure: screenshots/powder/powder-paired-config.png]

One pair, oai-wb-a1 and oai-wb-a2, has a single NI UBX160 daughter board along with a single 10Gb Ethernet port connected to the Powder wired network. The second pair, oai-wb-b1 and oai-wb-b2, has two NI UBX160 daughter boards and two 10Gb Ethernet ports connected to the Powder wired network.
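
With the two-daughterboard pair, both boards have to be named in the UHD subdevice specification before two channels become visible. The fragment below is a minimal UHD Python sketch, assuming the daughterboards sit in slots A and B and using a placeholder device address.

    # Minimal sketch: expose both UBX160 daughterboards on an oai-wb-b X310
    # as a two-channel device. Address and slot names are assumptions.
    import uhd

    usrp = uhd.usrp.MultiUSRP("addr=192.168.40.2")  # hypothetical device address

    usrp.set_rx_subdev_spec(uhd.usrp.SubdevSpec("A:0 B:0"))  # one RX channel per board
    usrp.set_tx_subdev_spec(uhd.usrp.SubdevSpec("A:0 B:0"))  # one TX channel per board

    print("RX channels:", usrp.get_rx_num_channels())  # expect 2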

All four X310 SDRs have their 1 PPS and 10 MHz clock inputs connected to an NI Octoclock-G module to provide a synchronized timing base. Currently the Octoclock is not synchronized to GPS.

15.3 Indoor OTA lab resources

The Indoor OTA environment consists of four NI B210 SDRs with antennas on one side of the lab, and four NI X310 SDRs with antennas on the opposite side of the lab:

[Figure: screenshots/powder/powder-ota-config.png]

The four B210 devices are each connected to a four-port Taoglas MA963 antenna in a 2x2 MIMO configuration, meaning that the TX/RX and RX2 ports on both Channel A and Channel B are connected to an antenna element. The B210 port to antenna port connectivity is as follows: Channel A TX/RX to port 1; Channel A RX2 to port 2; Channel B RX2 to port 3; Channel B TX/RX to port 4.
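
Using both channels requires matching this wiring in software. The fragment below is a minimal UHD Python sketch that enables both B210 frontends and selects the TX/RX ports for transmit and the RX2 ports for receive; the tuning frequency is only a placeholder.

    # Minimal sketch: 2x2 operation on an indoor-OTA B210, matching the
    # Taoglas MA963 wiring above (TX on TX/RX ports, RX on RX2 ports).
    import uhd

    usrp = uhd.usrp.MultiUSRP("type=b200")  # the B210 attached to this NUC

    # Expose both frontends (Channel A and Channel B) as channels 0 and 1.
    usrp.set_rx_subdev_spec(uhd.usrp.SubdevSpec("A:A A:B"))
    usrp.set_tx_subdev_spec(uhd.usrp.SubdevSpec("A:A A:B"))

    for chan in (0, 1):
        usrp.set_tx_antenna("TX/RX", chan)  # antenna ports 1 (ch A) and 4 (ch B)
        usrp.set_rx_antenna("RX2", chan)    # antenna ports 2 (ch A) and 3 (ch B)
        usrp.set_rx_freq(uhd.types.TuneRequest(3.6e9), chan)  # placeholder frequency
        usrp.set_tx_freq(uhd.types.TuneRequest(3.6e9), chan)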

The four B210 devices are accessed via USB from their associated Intel NUC nodes, ota-nuc1 to ota-nuc4. All NUCs are of the same type:

nuc8559: 4 nodes (Coffee Lake, 4 cores)
  CPU:   Core i7-8559U processor (4 cores, 2.7 GHz)
  RAM:   32GB Memory (2 x 16GB DDR4 DIMMs)
  Disks: 500GB NVMe SSD Drive
  NIC:   1GbE embedded NIC

Also connected to each of the NUCs is a RM520N COTS UE connected to a four-port Taoglas MA963 antenna. Allocating the associated NUC will provide access to the COTS UE.
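
Once the associated NUC is allocated, the COTS UE typically appears as a USB serial modem on that node. The fragment below is a minimal sketch using pyserial and standard 3GPP AT commands; the /dev/ttyUSB2 device path is only an assumption and may differ on the actual NUC.

    # Minimal sketch: query the RM520N COTS UE over an AT-command serial port.
    # The /dev/ttyUSB2 path is an assumption; check which port the UE exposes.
    import serial  # pyserial

    def at(port: serial.Serial, cmd: str) -> str:
        """Send one AT command and return the raw response text."""
        port.write((cmd + "\r\n").encode("ascii"))
        return port.read(256).decode("ascii", errors="replace")

    with serial.Serial("/dev/ttyUSB2", baudrate=115200, timeout=2) as ue:
        print(at(ue, "ATI"))       # module identification
        print(at(ue, "AT+CSQ"))    # signal quality
        print(at(ue, "AT+COPS?"))  # current operator / registration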

On the other side of the room, two of the X310 SDRs, ota-x310-2 and ota-x310-3, also have 2x2 MIMO capability via Taoglas GSA.8841 I-bar antennas connected to the TX/RX and RX2 ports of both Channel 0 and Channel 1. The other two X310s, ota-x310-1 and ota-x310-4, have two I-bar antennas each, connected to the TX/RX and RX2 ports of Channel 0 only.

All radios (X310s and B210s) have their 1 PPS and 10 MHz clock inputs connected to Octoclock-G modules to provide a synchronized timing base. The X310 devices connect to one Octoclock, and the B210 devices connect to another, with the two Octoclocks interconnected. Currently the Octoclocks are not synchronized to GPS.

15.4 Rooftop Base-station resources

Powder currently has seven rooftop base stations around the main University of Utah campus, as highlighted on this map:

[Figure: screenshots/powder/powder-bs-map.png]

(For a better sense of scale and location relative to all resources, see the Powder Map.) The siteids (used for node naming) for each base station are (clockwise from top right): Hospital (hospital), Honors (honors), Behavioral (bes), Friendship (fm), Browning (browning), MEB (meb), and USTAR (ustar).

Each Powder base station site includes a climate-controlled enclosure containing radio devices connected to Commscope and Keysight antennas. Each device has one or more dedicated 10G links to aggregation switches at the Fort Douglas datacenter. These network connections can be flexibly paired with compute at the aggregation point or slightly further upstream with Emulab/CloudLab resources.

Allocatable devices at base stations include:
  • One NI X310 SDR with a UBX160 daughter board connected to a Commscope VVSSP-360S-F multi-band antenna. The TX/RX port of the X310 is connected to one of the 3200-3800 MHz ports on the Commscope antenna. The radio can operate in TDD mode, transmit only, or receive only using that port. The X310 has two 10Gb Ethernet ports connected to the Powder wired network. These SDRs are available at all rooftops with the Powder node name cbrssdr1-siteid, where siteid is one of: bes, browning, fm, honors, hospital, meb, or ustar. (A minimal profile sketch for requesting one of these radios appears after this list.)

  • One NI X310 SDR with a single UBX160 daughter board connected to a Keysight N6850A broadband omnidirectional antenna. The X310 RX2 port is connected to the wideband Keysight antenna and is receive-only. The X310 has a single 10Gb Ethernet port connected to the Powder wired network. These SDRs are only available at a subset of the base stations, with the Powder node names cellsdr1-siteid, where siteid is one of: hospital, bes, or browning. NOTE: The other rooftop sites do not have this connection because of isolation issues with nearby commercial cellular transmitters, whose received power would be high enough to damage the receiver.

  • Skylark Wireless Massive MIMO Equipment. NOTE: Please refer to the Massive MIMO on POWDER documentation for complete and up-to-date information on using the massive MIMO equipment.

    Briefly, Skylark equipment consists of chains of multiple radios connected through a central hub. This hub is 64x64 capable and has 4 x 10Gb Ethernet backhaul connectivity. Powder provides compute- and RAM-intensive Dell R840 nodes (see the compute section below) for pairing with the massive MIMO equipment.

    Massive MIMO deployments are available at the Honors, USTAR, and MEB sites (Powder names: mmimo1-honors, mmimo1-meb, and mmimo1-ustar).
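
As referenced in the first item above, radios are bound in a profile by their Powder node names. The fragment below is a minimal geni-lib sketch that requests one rooftop CBRS X310 by component ID along with a near-edge compute node to drive it; the site (meb), node names, and the single link are illustrative choices, not a complete working profile.

    # Minimal geni-lib sketch: request a rooftop CBRS X310 by component ID plus
    # a d740 compute node, and connect them. Site choice "meb" is an example.
    import geni.portal as portal

    pc = portal.Context()
    request = pc.makeRequestRSpec()

    radio = request.RawPC("cbrs-radio")
    radio.component_id = "cbrssdr1-meb"    # cbrssdr1-<siteid> from the list above

    compute = request.RawPC("compute")
    compute.hardware_type = "d740"         # near-edge server class (see below)

    request.Link(members=[radio, compute])

    pc.printRequestRSpec(request)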

In addition to these user-allocatable resources, each enclosure contains several infrastructure devices of note:
  • A Safran (formerly Seven Solutions) WR-LEN endpoint. Part of a common White Rabbit time distribution network used to synchronize radios at all rooftop and dense base station sites.

  • An NI Octoclock-G module to provide a synchronized timing base to all SDRs. The Octoclock is synchronized with all other base stations and GPS via the White Rabbit network.

  • A low-powered Intel NUC for environmental, connectivity, and RF monitoring and management.

  • A Dell 1Gb network switch for connecting devices inside the enclosure.

  • An fs.com passive CWDM mux for multiplexing multiple 1Gb/10Gb connections over a single fiber-pair backhaul to the Fort datacenter.

  • A managed PDU to allow remote power control of devices inside the enclosure.

15.5 Dense-deployment Base-station resources

Powder currently has five street-side base stations and one rooftop base station located in a dense configuration along a common shuttle route on the main University of Utah campus. The base stations are highlighted in red and the shuttle route is shown in green on this map:

[Figure: screenshots/powder/powder-dbs-map.png]

(For a better sense of scale and location relative to all resources, see the Powder Map.) The siteids (used for node naming) for each base station are (clockwise from upper left): NC Wasatch (wasatch), NC Mario (mario), Moran (moran), Guest House (guesthouse), EBC (ebc), and USTAR (ustar).

Powder dense deployment base station locations contain one or more radio devices connected to a Commscope antenna and a small compute node connected via fiber to the Fort Douglas aggregation point.

Allocatable devices at dense base stations include:
  • NI B210 SDR connected to a Commscope VVSSP-360S-F multi-band antenna. The Channel A TX/RX port is connected to a CBRS port of the antenna. The B210 is accessed via a USB3 connection to a Neousys Nuvo-7501 ruggedized compute node:

    nuvo7501: 1 node (Coffee Lake, 4 cores)
      CPU:   Intel Core i3-8100T (4 cores, 3.1 GHz)
      RAM:   32GB wide-temperature range RAM
      Disks: 512GB wide-temperature range SATA SSD storage
      NIC:   1GbE embedded NIC

    The compute node is connected via two 1Gbps connections to a local switch which in turn uplinks via 10Gb fiber to the Fort datacenter.

    The Powder compute node name is cnode-siteid, where siteid is one of: ebc, guesthouse, mario, moran, ustar, or wasatch.

  • Benetel RU-650.

In addition to these user-allocatable resources, each enclosure contains several infrastructure devices of note:
  • A Safran (formerly Seven Solutions) WR-LEN endpoint. Part of a common White Rabbit time distribution network used to synchronize radios at all rooftop and dense base station sites.

  • Locally designed/built RF frontend/switch.

  • A 1Gb/10Gb network switch for connecting devices inside the enclosure.

  • An fs.com passive CWDM mux for multiplexing multiple 1Gb/10Gb connections over a single fiber-pair backhaul to the Fort datacenter.

  • A managed PDU to allow remote power control of devices inside the enclosure.

15.6 Fixed-endpoint resources

Powder has ten "fixed endpoint" (FE) installations which are enclosures permanently affixed to the sides of buildings at roughly human height (5-6ft). The endpoints are highlighted in red on the following map:

[Figure: screenshots/powder/powder-fe-map.png]

(For a better sense of scale and location relative to all resources, see the Powder Map.) The siteids (used for node naming) for each FE are (clockwise from top right): Moran (moran), EBC (ebc), Guest House (guesthouse), Sage Point (sagepoint), Madsen (madsen), Central Parking Garage (cpg), Law 73 (law73), Bookstore (bookstore), Humanities (humanities), and WEB (web).

Each Powder FE enclosure contains an ensemble of radio equipment with complementary small form factor compute nodes. Unlike base stations, FEs do not have fiber backhaul connectivity. Instead they use commercial cellular and Campus WiFi to provide seamless access to resources.

There are three allocatable devices at fixed endpoints:
  • One Quectel RM520N COTS UE connected to four Taoglas GSA.8841 I-bar antennas. The UE is USB3 connected to a NUC to provide basic compute capability and a 1Gb Ethernet connection to provide external access. This NUC is known as nuc1 and is the resource allocated to use the radio.

  • One receive-only NI B210 SDR connected to a Taoglas GSA.8841 I-bar antenna via the Channel A RX2 port. The B210 is USB3 connected to the same NUC (nuc1) as the COTS UE.

  • One transmit and receive NI B210 SDR connected to a Taoglas GSA.8841 I-bar antenna via the Channel A TX/RX port. The B210 is USB3 connected to a NUC to provide basic compute capability and a 1Gb Ethernet connection to provide external access. This NUC is known as nuc2 and is the resource allocated to use the radio.

Both NUC compute nodes are:

nuc8559: 4 nodes (Coffee Lake, 4 cores)
  CPU:   Core i7-8559U processor (4 cores, 2.7 GHz)
  RAM:   32GB Memory (2 x 16GB DDR4 DIMMs)
  Disks: 250GB NVMe SSD Drive
  NIC:   1GbE embedded NIC

15.7 Mobile-endpoint resources

Powder has twenty "mobile endpoint" (ME) installations which are enclosures mounted on a rear shelf inside of campus shuttle buses.

As with FEs, each Powder ME enclosure contains radio equipment and an associated small form factor compute node, and uses commercial cellular and Campus WiFi to provide seamless access to resources. However, unlike FEs, not all MEs are likely to be available at any given time: only shuttles that are running and on a route are available for allocation. For a map of the bus routes and their proximate Powder resources, see the Powder Map.

There are two allocatable devices on each mobile endpoint:
  • One Quectel RM520N COTS UE connected to a four-port Taoglas MA963 antenna. The UE is USB3 connected to a Supermicro small form factor node to provide basic compute capability and a 1Gb Ethernet connection to provide external access. This node is known as ed1 and is the resource allocated to use the radio.

  • One NI B210 SDR connected to a Taoglas GSA.8841 I-bar antenna via the Channel A TX/RX port. The B210 is likewise USB3 connected to the Supermicro node ed1.

The Supermicro Compute Node is:

e300-8d: 20 nodes (Broadwell, 4 cores)
  CPU:   Intel Xeon D-1518 SoC processor (4 cores, 2.2 GHz)
  RAM:   64GB Memory (2 x 32GB Hynix 2667MHz DDR4 DIMMs)
  Disks: 480GB Intel SSDSCKKB480G8 SATA SSD Drive
  NIC:   2 x 10Gb Intel SoC Ethernet

15.8 Near-edge computing resources

As previously documented, all base stations are connected via fiber to the Fort Douglas datacenter. The fibers are connected to a collection of CWDM muxes (one per base station) which break out to 1Gb/10Gb connections to a set of three Dell S5248F-ON Ethernet switches. The three switches are interconnected via 2 x 100Gb links and all host 10Gb connections from the rooftop and dense base-stations. One switch also serves as the aggregation switch with 100Gb uplinks to the MEB and DDC datacenters that contain further Powder resources and uplinks to Emulab and CloudLab.

There are 19 servers dedicated to Powder use:

d740: 16 nodes (Skylake, 24 cores)
  CPU:   2 x Xeon Gold 6126 processors (12 cores each, 2.6 GHz)
  RAM:   96GB Memory (12 x 8GB RDIMMs, 2.67 GT/s)
  Disks: 2 x 240GB SATA 6Gbps SSD Drives
  NIC:   10GbE dual-port embedded NIC (Intel X710)
  NIC:   10GbE dual-port converged NIC (Intel X710)

d840: 3 nodes (Skylake, 64 cores)
  CPU:   4 x Xeon Gold 6130 processors (16 cores each, 2.1 GHz)
  RAM:   768GB Memory (24 x 32GB RDIMMs, 2.67 GT/s)
  Disks: 240GB SATA 6Gbps SSD Drive
  Disks: 4 x 1.6TB NVMe SSD Drives
  NIC:   10GbE dual-port embedded NIC (Intel X710)
  NIC:   40GbE dual-port converged NIC (Intel XL710)

All nodes are connected to two networks: a control network used for node management and access, and an experiment network carried over the 10GbE/40GbE converged NICs listed above.

15.9 Cloud computing resources

In addition, Powder can allocate bare-metal computing resources on any one of several federated clusters, including CloudLab and Emulab. The closest (latency-wise) and most plentiful cloud nodes are the Emulab d430 nodes:

d430: 160 nodes (Haswell, 16 cores, 3 disks)
  CPU:  Two Intel E5-2630v3 8-core CPUs at 2.4 GHz (Haswell)
  RAM:  64GB ECC Memory (8 x 8GB DDR4 2133 MT/s)
  Disk: One 200GB 6G SATA SSD
  Disk: Two 1TB 7.2K RPM 6G SATA HDDs
  NIC:  Two or four Intel I350 1GbE NICs
  NIC:  Two or four Intel X710 10GbE NICs

These nodes have multiple 10Gb Ethernet interfaces and 80Gbps of connectivity to the Powder switch fabric.