Does the PCI device address actually mean the slot address? And when does a PCIe slot get its address?

I'd like to clarify how the configuration address space in PCI and PCIe works. Namely, a PCI peripheral on a bus is addressed with a device:function pair (the bus or domain:bus part is not included, since only one bus is considered here). How is the device address determined? What sets it?

According to the answer to this question on Super User and other texts, the device address is actually the address of the PCI slot, and thus it should be wired into the hardware during manufacturing. Is that correct?

Then, how is this address defined in the PCIe case?

From the same answer:

each device has its own individual point-to-point serial connection to its upstream device

-- thus, in PCIe each slot is connected to an upstream switch device in a star topology. And the addresses of the slots are re-assigned by the switch on reset? (So, on power-up the switch probes the devices in each slot, and if there is a response it assigns a device address to the slot?) As in the same answer:

(hence each bridge, including the top-level "root complex", tells each device what its device ID will be)

Or are they also resolved in the hardware/firmware of the switch? (So the switch always has a device address assigned to the wires going to a slot.)

It seems that, for easier hot-plugging, each slot should have a permanent address on the bus (on the "network" of the PCIe switch).


2 Answers

This is rather different between PCI and PCI express. However, the bus and device numbers are not explicitly configured. For PCI, the device number is determined by which AD line is tied to the IDSEL input. Hence the device number is determined by the physical slot in which the card is installed, determined by the board layout. See https://superuser.com/questions/1057421/how-is-the-device-determined-in-pci-enumeration-bus-device-function for more information.

For PCIe, what happens is that host software configures the switch ports to route traffic based on the bus and device numbers, then the target device captures the bus and device numbers from the PCIe configuration request TLPs. The switches themselves are dumb - they rely on host software to configure everything, and then route traffic based on the base/limit and bus number register settings. Each link is point-to-point and gets its own bus number. The device at the far end of a link always has device number 0. Inside of PCIe switches there is an emulated PCI bus, and each switch port will have its own device number. So the ports that correspond to each slot can have different device numbers, fixed in silicon by the switch manufacturer, but the devices installed in those slots will all have device number 0.

Since the device number always being zero is a bit of a waste, ARI was created for PCIe. When ARI is enabled, the device number gets rolled into the function number, enabling a single PCIe device to support up to 256 functions. This goes hand in hand with SR-IOV.

For hot-plugging, it is up to software to set aside both address space and bus numbers to support new devices in the hierarchy. PCIe is designed to be an extension of PCI and as a result hot-plug support is poor.


  • I guess this explains everything: "Inside of PCIe switches there is an emulated PCI bus, and each switch port will have its own device number. So the ports that correspond to each slot can have different device numbers, fixed in silicon by the switch manufacturer, but the devices installed in those slots will all have device number 0." And when "the host software configures the switch ports to route traffic..." it basically assigns the device numbers to the switch ports, right? – xealits, Oct 28, 2019 at 11:49
  • No, the switch port device numbers are generally fixed in silicon and cannot be changed. Software can only set bus numbers indirectly by configuring the bus number registers on the port upstream of the device. – alex.forencich, Oct 29, 2019 at 17:41

The "device ID" is a value that is read from one of the device's registers. It has nothing to do with how the device is connected, and multiple devices can have the same ID.

What that answer calls "device ID" actually is the slot number.

A PCI-E switch pretends to have two levels of PCI buses, one between the upstream device and a bridge for each downstream port, and one with a 1:1 connection for each port. So the physical PCI-E connection is point-to-point, but the virtual bus inside the switch has many devices, and therefore many slot numbers.

Slot numbers describe physical connections, and never change.


  • Oops! I didn't mean "device ID/vendor ID" -- going to fix the question now. They call it "device address/device number" -- somehow I averaged it to "device ID". Not sure about the official naming in the specification. – xealits, Apr 12, 2017 at 15:26


How is the device determined in PCI enumeration? (bus/device/function)

I am confused about PCI Bus/Device/Function enumeration. Looking at the Wikipedia page for PCI configuration, I see that for a given bus, the master will request the vendor ID and device ID for all devices using function 0. If all 0xFFs are returned, then no device is there, and enumeration moves on. If a valid device ID and vendor ID are found, then there is a PCI unit there and it will be enumerated. I am unsure how the device part of bus.device.function is determined.

For example, let's say I have a CPU with one PCI bus and one PCI peripheral attached to it. I understand that the CPU will look on bus 0 (by default) and will check for all device numbers looking at function 0. How does the peripheral's device number get determined?


In the original PCI framework ("Conventional PCI") and in PCI-X as well, devices corresponded to "slots", each with its own connectors attached to the same parallel bus. Each slot had a unique ID pin that was asserted during enumeration. The enumeration was essentially asking (for each slot): "Hey, is there anything present in this slot?" The device responded by driving data onto the bus in response to this signal. Lack of response meant no device.

A device could also be a "bridge" which meant that it formed a subordinate bus. That bus would have a separate ID (assigned from the upstream), and would have its own set of slots that were enumerated independently.

PCI-Express (PCIe) is totally different. PCIe isn't really a bus -- as in a resource shared among devices; instead each device has its own individual point-to-point serial connection to its upstream device (and to any downstream devices -- and if it has downstream devices, that means it is functioning as a bridge too). Think of PCIe like a LAN. Each bridge is analogous to a switch, that has a bunch of ports connected to other devices. The other devices may be terminal devices, or they might be other switches (i.e. PCIe bridges).

PCIe was designed in such a way that its conceptual framework and addressing (and hence the behavior provided to software) is compatible with PCI and PCI-X. The implementation though is completely different. In enumerating devices, for example, since it's point-to-point, the only question that needs to be determined at each point in the enumeration is "anything there?" Since each device has its own independent set of wires, the device IDs are essentially all hard-coded (hence each bridge, including the top-level "root complex", tells each device what its device ID will be).

In all cases, the "function" part of the bus/device/function is handled strictly within the peripheral. For example, a dual port NIC controller will often have two functions, one for each port. They can be configured and operated independently, but the data path from CPU to function is the same for both.


  • The answer is a bit confusing: 1) in PCI "device number" actually means "slot number" (and it makes sense), 2) you say "PCIe is totally different" and "since each device has its own independent set of wires, the device IDs are essentially all hard-coded", which means the set of wires (= the slot) has the ID hard-coded to it, thus it is the same as in PCI. Now, the question is when the "hard-coding" happens? The switches/bridges re-assign the IDs on reset? – xealits, Apr 12, 2017 at 14:23
  • Yeah. That could be worded better. The point is that in PCI, the card is on a shared bus but "knows" what slot it's in and only responds when its slot-specific pin is asserted. In PCIe, the bridge has N different sets of "wires". So the bridge device has a discrete slot number for every set of wires. From the bridge's point of view, that slot has a definite number; it only has to determine whether there's something there. The card itself doesn't know what slot it's in. Once the bridge determines there's something there, it then tells that device what its slot number is. – Gil Hamilton, Apr 12, 2017 at 16:36


In-class: PCI Enumeration

PCI bus overview

The PCI bus (see wiki.osdev.org/PCI) provides a standard architecture for accessing I/O devices on most modern computers, supporting discovery and configuration of attached devices as well as both memory-mapped and I/O-mapped (e.g. via the inb, outb instructions) access. In particular, PCI provides a mechanism where each card or device specifies both its identity and the resources it needs (memory-mapped or I/O space, interrupts), and the BIOS and/or OS can discover all devices in the system and allocate interrupts, physical memory space, and I/O space to each device.

PCI has three levels of structure - bus, device, and function. Buses are structured as a tree, with bus 0 as the root and additional buses connected by PCI bridges . An individual PCI card (or chip on the motherboard) that attaches to a PCI bus is a device; note that PCI bridges are themselves devices. Simple devices implement a single function (function 0); more complex devices (e.g. a multi-port ethernet card) may contain multiple functions, which operate (for the most part) like independent PCI devices.

The PCI bus itself is implemented by the motherboard chipset (actually now it's typically integrated onto the CPU itself), which handles tasks such as bus arbitration - ensuring that only one device can transmit on the bus at any time. In parallel PCI the data lines are shared between all devices on a bus, and arbitration is handled by separate bus request lines from each slot to the PCI controller, and corresponding bus grant lines from the controller back to each slot.

The PCI controller is also responsible for providing software access to the configuration space for each device, a 256-byte set of registers which identifies the device and allows configuration of device properties. This is performed via a separate signal to each card or device, so the CPU can detect and configure all devices on the bus without any prior knowledge of e.g. what address they are mapped at.

lspci and examples

Configuration space access

bit 31     | bits 30-24 | bits 23-16 | bits 15-11    | bits 10-8       | bits 7-0
Enable Bit | Reserved   | Bus Number | Device Number | Function Number | Register Offset
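The address written to CONFIG_ADDR (port 0xCF8) packs these fields into one 32-bit value; a sketch of that packing, not code from the handout:

    // Pack a bus/device/function/register offset into the 32-bit value that
    // gets written to CONFIG_ADDR (port 0xCF8).
    unsigned int
    make_config_addr(int bus, int dev, int fn, int offset)
    {
      return (1u << 31)                   // enable bit
           | ((unsigned int)bus << 16)    // bus number
           | ((unsigned int)dev << 11)    // device number
           | ((unsigned int)fn  << 8)     // function number
           | (offset & 0xFC);             // register offset, dword aligned
    }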

Reading configuration space on XV6

I/O port access

First you'll need access to I/O instructions within a user process. The Intel architecture provides two mechanisms which allow access to in and out instructions while in user space:

  • TSS bitmap - a bitmap can be specified at the end of the TSS which identifies which I/O ports may be accessed from user mode (ring 3).
  • EFLAGS - bits 12 and 13 of the EFLAGS register indicate the priority level necessary to access any I/O port; by default these bits are zero.

The TSS bitmap hasn't been implemented in XV6, and we might not want to extend it far enough to cover ports 0xCF8 and 0xCFC anyway (that would take over 400 bytes per process). Instead you'll implement a new system call, iopl , which takes a single argument in the range 0-3 and sets the I/O privilege level bits in EFLAGS to that value; your readpci utility can then call iopl(3) and will be able to access I/O ports directly.

Hint - where does EFLAGS come from when you return to user space? Do you actually have to modify the current EFLAGS register in the iopl system call?
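One possible answer to the hint, as a hypothetical sketch rather than the official solution (older xv6 trees spell the current-process pointer proc; newer ones use myproc()):

    int
    sys_iopl(void)
    {
      int level;

      if(argint(0, &level) < 0 || level < 0 || level > 3)
        return -1;
      // IOPL lives in bits 12-13 of EFLAGS.  Patch the *saved* user EFLAGS in
      // the trapframe, so the new value takes effect on the iret back to user
      // space; the kernel's own EFLAGS register never needs to change.
      proc->tf->eflags = (proc->tf->eflags & ~(3 << 12)) | (level << 12);
      return 0;
    }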

Create a user program named readpci (add it to UPROGS in your makefile) and start out with the following code to test iopl:
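A minimal sketch of such a test, assuming you have added iopl() to user.h/usys.S and can use the inb() inline from x86.h; the choice of port is a guess (CONFIG_DATA with no address latched reads back as all ones):

    #include "types.h"
    #include "stat.h"
    #include "user.h"
    #include "x86.h"

    int
    main(int argc, char *argv[])
    {
      iopl(3);                        // request I/O privilege level 3
      printf(1, "%x\n", inb(0xCFC));  // should print ff once iopl works
      exit();
    }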

If iopl was implemented correctly it should run, printing FF, while it will die with a fault otherwise.

PCI register access

To access CONFIG_ADDR and CONFIG_DATA you'll need 32-bit versions of the in/out instructions, which aren't provided in XV6:
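They follow the same pattern as the inb()/outb() inlines in xv6's x86.h; a sketch you could add to x86.h or directly to readpci.c:

    static inline uint
    inl(ushort port)
    {
      uint data;

      asm volatile("inl %1,%0" : "=a" (data) : "d" (port));
      return data;
    }

    static inline void
    outl(ushort port, uint data)
    {
      asm volatile("outl %0,%1" : : "a" (data), "d" (port));
    }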

Modify readpci to take 3 numeric arguments - bus, device, and function - and read and print the corresponding configuration space in hex. If you want the printout to look nice like this you'll have to modify printf.c to support formats like "%02x" :-)
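Putting the pieces above together, the core of readpci might look roughly like this (it assumes the make_config_addr() and inl()/outl() sketches above; the %02x/%08x widths need the printf.c change just mentioned):

    #define CONFIG_ADDR 0xCF8
    #define CONFIG_DATA 0xCFC

    // Read one 32-bit register from the configuration space of bus:dev.fn.
    unsigned int
    pci_cfg_read32(int bus, int dev, int fn, int offset)
    {
      outl(CONFIG_ADDR, make_config_addr(bus, dev, fn, offset));
      return inl(CONFIG_DATA);
    }

    // Dump the 256-byte configuration space, 16 bytes per line.
    void
    dump_config(int bus, int dev, int fn)
    {
      int off;

      for(off = 0; off < 256; off += 4){
        if(off % 16 == 0)
          printf(1, "\n%02x:", off);
        printf(1, " %08x", pci_cfg_read32(bus, dev, fn, off));
      }
      printf(1, "\n");
    }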

Configuration space structure

register | bits 31-24 | bits 23-16  | bits 15-8     | bits 7-0
00       | Device ID                | Vendor ID
04       | Status                   | Command
08       | Class code | Subclass    | Prog IF       | Revision ID
0C       | BIST       | Header type | Latency Timer | Cache Line Size
  • Device ID: identifies the particular device; valid IDs are allocated by the vendor.
  • Vendor ID: identifies the manufacturer of the device; valid IDs are allocated by the PCI-SIG to ensure uniqueness, and 0xFFFF is an invalid value that will be returned on read accesses to configuration space registers of non-existent devices.
  • Status, Command: for handling various error cases
  • Class Code: A read-only register that specifies the type of function the device performs.
  • Subclass: A read-only register that specifies the specific function the device performs.
  • Prog IF: A read-only register that specifies a register-level programming interface the device has, if it has any at all.
  • Revision ID: So a vendor can fix something without allocating a new device ID.
  • BIST: built-in self-test stuff.
  • Header Type: bit 7 (0x80) indicates whether it is a multi-function device, while interesting values of the remaining bits are: 00 = general device, 01 = PCI-to-PCI bridge.
  • Latency Timer, Cache Line Size: very hardware-specific stuff. We'll let the BIOS set it properly and ignore it.

Note that all 16- and 32-bit values are in little-endian order (not surprisingly, as PCI was originally developed by Intel). This means that you can access them directly on a PC without needing to translate byte order.

For a general-purpose device the remainder of the configuration space is structured as follows:

register | bits 31-24  | bits 23-16 | bits 15-8     | bits 7-0
10       | Base address #0 (BAR0)
14       | Base address #1 (BAR1)
18       | Base address #2 (BAR2)
1C       | Base address #3 (BAR3)
20       | Base address #4 (BAR4)
24       | Base address #5 (BAR5)
28       | Cardbus CIS Pointer
2C       | Subsystem ID             | Subsystem Vendor ID
30       | Expansion ROM base address
34       | Reserved                                 | Capabilities Pointer
38       | Reserved
3C       | Max latency | Min Grant  | Interrupt PIN | Interrupt Line
  • CardBus CIS Pointer: obsolete
  • Interrupt Line: PIC IRQ 0-15, or FF (no IRQ)
  • Interrupt Pin: PCI-specific, or for use with IOAPIC.
  • Max Latency: hardware-specific
  • Min Grant: hardware-specific
  • Capabilities Pointer: never mind...

We'll ignore all of these values for now, and talk about them during class.

PCI bridges

In a PCI bridge this portion of the configuration data has the following structure:

register | bits 31-24                | bits 23-16             | bits 15-8            | bits 7-0
10       | Base address #0 (BAR0)
14       | Base address #1 (BAR1)
18       | Secondary Latency Timer   | Subordinate Bus Number | Secondary Bus Number | Primary Bus Number
1C       | Secondary Status                                   | I/O Limit            | I/O Base
20       | Memory Limit                                       | Memory Base
24       | Prefetchable Memory Limit                          | Prefetchable Memory Base
28       | Prefetchable Base Upper 32 Bits
2C       | Prefetchable Limit Upper 32 Bits
30       | I/O Limit Upper 16 Bits                            | I/O Base Upper 16 Bits
34       | Reserved                                           | Capability Pointer
38       | Expansion ROM base address
3C       | Bridge Control                                     | Interrupt PIN        | Interrupt Line

The main field of interest is the Secondary Bus Number , which identifies the "child" bus in the tree. For example, since the root PCI bus is always 0, if there is a bus 1 there must be a PCI bridge on bus 0 with secondary bus number 1.

You can recursively enumerate devices on the PCI bus by scanning bus 0 and, whenever you detect a PCI bridge, recursively scanning its secondary bus; a sketch follows the list below. In doing this keep in mind:

  • There can be up to 32 devices on a bus. Each device number corresponds to a slot, so some may be missing in the middle. (e.g. empty PCI slots)
  • There can be up to 8 functions in a multi-function device; however you can stop when you get to the first missing one.
  • The first PCI bridge is going to be the host bridge, with a configuration record indicating that it bridges from bus 0 to bus 0. Don't enumerate it recursively.
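Putting these rules together, a rough sketch of the recursion (it uses the pci_cfg_read32() helper sketched earlier rather than the structures in pci-cfg.h):

    void
    scan_bus(int bus)
    {
      int dev, fn, nfn;
      unsigned int id, hdr, type, secondary;

      for(dev = 0; dev < 32; dev++){
        id = pci_cfg_read32(bus, dev, 0, 0x00);
        if((id & 0xFFFF) == 0xFFFF)            // empty slot
          continue;
        hdr = (pci_cfg_read32(bus, dev, 0, 0x0C) >> 16) & 0xFF;
        nfn = (hdr & 0x80) ? 8 : 1;            // bit 7 = multi-function device
        for(fn = 0; fn < nfn; fn++){
          id = pci_cfg_read32(bus, dev, fn, 0x00);
          if((id & 0xFFFF) == 0xFFFF)          // stop at the first missing function
            break;
          printf(1, "%d:%d.%d vendor %x device %x\n",
                 bus, dev, fn, id & 0xFFFF, id >> 16);
          type = (pci_cfg_read32(bus, dev, fn, 0x0C) >> 16) & 0x7F;
          if(type == 0x01){                    // PCI-to-PCI bridge
            secondary = (pci_cfg_read32(bus, dev, fn, 0x18) >> 8) & 0xFF;
            if(secondary != (unsigned int)bus) // skip the bus-0-to-bus-0 host bridge
              scan_bus(secondary);
          }
        }
      }
    }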

You may find the file pci-cfg.h useful, as it contains C structure definitions for PCI device and bridge configurations.

To test recursive enumeration you will need a newer version of QEMU - you can download the latest version for Ubuntu 12.04.3 (i.e. the virtual machine image most of you are using) as follows:

In your xv6 directory you should then be able to run QEMU as follows:

This is the equivalent of running make qemu-nox - if you want the graphical window, eliminate the -nographic argument. It should print several warning messages and then start up, and if you enumerate the PCI bus correctly you'll see devices on buses 1 and 2.

With the following code modification you can compile readpci.c for Linux with the command gcc -DHOST_VERSION readpci.c -o readpci; you will also need to change all your printf calls from printf(1, "..."); to cprintf("...");. You should then be able to run your readpci command on any Linux system where you have root access (e.g. your class virtual machine image).
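A guess at the general shape of such a compatibility block (the include choices and the cprintf macro here are assumptions, not the actual handout code):

    #ifdef HOST_VERSION
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>                      /* iopl(), inb(), inl(), outl() */
    #define cprintf(...) printf(__VA_ARGS__)
    #else
    #include "types.h"
    #include "stat.h"
    #include "user.h"
    #include "x86.h"
    #define cprintf(...) printf(1, __VA_ARGS__)
    #endif

On the host side, iopl() then refers to the Linux system call of the same name, which is why root access is needed.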

Peripheral Component Interconnect (PCI)

PCI stands for Peripheral Component Interconnect.


PCI is a standard expansion bus that was common in computers from roughly 1993 to 2007. For a long time it was the standard bus for expansion cards such as sound cards and network cards. It is a parallel bus that, in its most common form, runs at 33 or 66 MHz and is either 32 or 64 bits wide. It has since been replaced by PCI Express, which is a serial bus, in contrast to PCI. A PCI slot is simply the connector used to attach a card to the bus; when empty, it sits there and does nothing.

Types of PCI: These are the various types of PCI:


  • PCI 32-bit at 33 MHz: peak transfer rate of about 133 MB/s.
  • PCI 64-bit at 33 MHz: peak transfer rate of about 266 MB/s.
  • PCI 32-bit at 66 MHz: peak transfer rate of about 266 MB/s.
  • PCI 64-bit at 66 MHz: peak transfer rate of about 533 MB/s.

Function of PCI: PCI slots are used to install sound cards, Ethernet and wireless cards, and nowadays solid-state drives using NVMe technology that deliver speeds many times faster than SATA SSDs. PCI slots also allow discrete graphics cards to be added to a computer.

PCI slots (and their variants) let you add expansion cards to a motherboard. Expansion cards extend the machine's capabilities beyond what the motherboard can provide alone, such as improved graphics, better sound, additional USB and hard-drive controllers, and extra network interface options, to name a few.

Advantages of PCI :

  • You’ll interface a greatest of five components to the PCI and you’ll be able moreover supplant each of them by settled gadgets on the motherboard.
  • You have different PCI buses on the same computer.
  • The PCI transport will improve the speed of the exchanges from 33MHz to 133 MHz with a transfer rate of 1 gigabyte per second.
  • The PCI can handle gadgets employing a greatest of 5 volts and the pins utilized can exchange more than one flag through one stick.

Disadvantages of PCI :

  • A PCI graphics card cannot access system memory.
  • PCI does not support pipelining.


Linux Device Drivers, 3rd Edition by Jonathan Corbet, Alessandro Rubini, Greg Kroah-Hartman


Chapter 12. PCI Drivers

While Chapter 9 introduced the lowest levels of hardware control, this chapter provides an overview of the higher-level bus architectures. A bus is made up of both an electrical interface and a programming interface. In this chapter, we deal with the programming interface.

This chapter covers a number of bus architectures. However, the primary focus is on the kernel functions that access Peripheral Component Interconnect (PCI) peripherals, because these days the PCI bus is the most commonly used peripheral bus on desktops and bigger computers. The bus is the one that is best supported by the kernel. ISA is still common for electronic hobbyists and is described later, although it is pretty much a bare-metal kind of bus, and there isn’t much to say in addition to what is covered in Chapter 9 and Chapter 10 .

The PCI Interface

Although many computer users think of PCI as a way of laying out electrical wires, it is actually a complete set of specifications defining how different parts of a computer should interact.

The PCI specification covers most issues related to computer interfaces. We are not going to cover it all here; in this section, we are mainly concerned with how a PCI driver can find its hardware and gain access to it. The probing techniques discussed in Chapter 12 and Chapter 10 can be used with PCI devices, but the specification offers an alternative that is preferable to probing.

The PCI architecture was designed as a replacement for the ISA standard, with three main goals: to get better performance when transferring data between the computer and its peripherals, to be as platform independent as possible, and to simplify adding and removing peripherals to the system.

The PCI bus achieves better performance by using a higher clock rate than ISA; its clock runs at 25 or 33 MHz (its actual rate being a factor of the system clock), and 66-MHz and even 133-MHz implementations have recently been deployed as well. Moreover, it is equipped with a 32-bit data bus, and a 64-bit extension has been included in the specification. Platform independence is often a goal in the design of a computer bus, and it’s an especially important feature of PCI, because the PC world has always been dominated by processor-specific interface standards. PCI is currently used extensively on IA-32, Alpha, PowerPC, SPARC64, and IA-64 systems, and some other platforms as well.

What is most relevant to the driver writer, however, is PCI’s support for autodetection of interface boards. PCI devices are jumperless (unlike most older peripherals) and are automatically configured at boot time. Then, the device driver must be able to access configuration information in the device in order to complete initialization. This happens without the need to perform any probing.

PCI Addressing

Each PCI peripheral is identified by a bus number, a device number, and a function number. The PCI specification permits a single system to host up to 256 buses, but because 256 buses are not sufficient for many large systems, Linux now supports PCI domains . Each PCI domain can host up to 256 buses. Each bus hosts up to 32 devices, and each device can be a multifunction board (such as an audio device with an accompanying CD-ROM drive) with a maximum of eight functions. Therefore, each function can be identified at hardware level by a 16-bit address, or key. Device drivers written for Linux, though, don’t need to deal with those binary addresses, because they use a specific data structure, called pci_dev , to act on the devices.

Most recent workstations feature at least two PCI buses. Plugging more than one bus in a single system is accomplished by means of bridges , special-purpose PCI peripherals whose task is joining two buses. The overall layout of a PCI system is a tree where each bus is connected to an upper-layer bus, up to bus 0 at the root of the tree. The CardBus PC-card system is also connected to the PCI system via bridges. A typical PCI system is represented in Figure 12-1 , where the various bridges are highlighted.

Layout of a typical PCI system

Figure 12-1. Layout of a typical PCI system

The 16-bit hardware addresses associated with PCI peripherals, although mostly hidden in the struct pci_dev object, are still visible occasionally, especially when lists of devices are being used. One such situation is the output of lspci (part of the pciutils package, available with most distributions) and the layout of information in /proc/pci and /proc/bus/pci . The sysfs representation of PCI devices also shows this addressing scheme, with the addition of the PCI domain information. [ 1 ] When the hardware address is displayed, it can be shown as two values (an 8-bit bus number and an 8-bit device and function number), as three values (bus, device, and function), or as four values (domain, bus, device, and function); all the values are usually displayed in hexadecimal.

For example, /proc/bus/pci/devices uses a single 16-bit field (to ease parsing and sorting), while /proc/bus/pci/busnumber splits the address into three fields. The following shows how those addresses appear, showing only the beginning of the output lines:

All three lists of devices are sorted in the same order, since lspci uses the /proc files as its source of information. Taking the VGA video controller as an example, 0x00a0 means 0000:00:14.0 when split into domain (16 bits), bus (8 bits), device (5 bits) and function (3 bits).
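To make the split concrete, here is the arithmetic spelled out as a small illustration (PCI_SLOT() and PCI_FUNC() in <linux/pci.h> perform the last two steps):

    unsigned int addr  = 0x00a0;             /* bus + devfn, as in /proc/bus/pci/devices */
    unsigned int bus   = (addr >> 8) & 0xff; /* 0x00 */
    unsigned int devfn = addr & 0xff;        /* 0xa0 */
    unsigned int dev   = devfn >> 3;         /* 0x14, i.e. PCI_SLOT(devfn) */
    unsigned int fn    = devfn & 0x07;       /* 0,    i.e. PCI_FUNC(devfn) */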

The hardware circuitry of each peripheral board answers queries pertaining to three address spaces: memory locations, I/O ports, and configuration registers. The first two address spaces are shared by all the devices on the same PCI bus (i.e., when you access a memory location, all the devices on that PCI bus see the bus cycle at the same time). The configuration space, on the other hand, exploits geographical addressing . Configuration queries address only one slot at a time, so they never collide.

As far as the driver is concerned, memory and I/O regions are accessed in the usual ways via inb , readb , and so forth. Configuration transactions, on the other hand, are performed by calling specific kernel functions to access configuration registers. With regard to interrupts, every PCI slot has four interrupt pins, and each device function can use one of them without being concerned about how those pins are routed to the CPU. Such routing is the responsibility of the computer platform and is implemented outside of the PCI bus. Since the PCI specification requires interrupt lines to be shareable, even a processor with a limited number of IRQ lines, such as the x86, can host many PCI interface boards (each with four interrupt pins).

The I/O space in a PCI bus uses a 32-bit address bus (leading to 4 GB of I/O ports), while the memory space can be accessed with either 32-bit or 64-bit addresses. 64-bit addresses are available on more recent platforms. Addresses are supposed to be unique to one device, but software may erroneously configure two devices to the same address, making it impossible to access either one. But this problem never occurs unless a driver is willingly playing with registers it shouldn’t touch. The good news is that every memory and I/O address region offered by the interface board can be remapped by means of configuration transactions. That is, the firmware initializes PCI hardware at system boot, mapping each region to a different address to avoid collisions. [ 2 ] The addresses to which these regions are currently mapped can be read from the configuration space, so the Linux driver can access its devices without probing. After reading the configuration registers, the driver can safely access its hardware.

The PCI configuration space consists of 256 bytes for each device function (except for PCI Express devices, which have 4 KB of configuration space for each function), and the layout of the configuration registers is standardized. Four bytes of the configuration space hold a unique function ID, so the driver can identify its device by looking for the specific ID for that peripheral. [ 3 ] In summary, each device board is geographically addressed to retrieve its configuration registers; the information in those registers can then be used to perform normal I/O access, without the need for further geographic addressing.

It should be clear from this description that the main innovation of the PCI interface standard over ISA is the configuration address space. Therefore, in addition to the usual driver code, a PCI driver needs the ability to access the configuration space, in order to save itself from risky probing tasks.

For the remainder of this chapter, we use the word device to refer to a device function, because each function in a multifunction board acts as an independent entity. When we refer to a device, we mean the tuple “domain number, bus number, device number, and function number.”

To see how PCI works, we start from system boot, since that’s when the devices are configured.

When power is applied to a PCI device, the hardware remains inactive. In other words, the device responds only to configuration transactions. At power on, the device has no memory and no I/O ports mapped in the computer’s address space; every other device-specific feature, such as interrupt reporting, is disabled as well.

Fortunately, every PCI motherboard is equipped with PCI-aware firmware, called the BIOS, NVRAM, or PROM, depending on the platform. The firmware offers access to the device configuration address space by reading and writing registers in the PCI controller.

At system boot, the firmware (or the Linux kernel, if so configured) performs configuration transactions with every PCI peripheral in order to allocate a safe place for each address region it offers. By the time a device driver accesses the device, its memory and I/O regions have already been mapped into the processor’s address space. The driver can change this default assignment, but it never needs to do that.

As suggested, the user can look at the PCI device list and the devices’ configuration registers by reading /proc/bus/pci/devices and /proc/bus/pci/*/* . The former is a text file with (hexadecimal) device information, and the latter are binary files that report a snapshot of the configuration registers of each device, one file per device. The individual PCI device directories in the sysfs tree can be found in /sys/bus/pci/devices . A PCI device directory contains a number of different files:

The file config is a binary file that allows the raw PCI config information to be read from the device (just like the /proc/bus/pci/*/* provides.) The files vendor , device , subsystem_device , subsystem_vendor , and class all refer to the specific values of this PCI device (all PCI devices provide this information.) The file irq shows the current IRQ assigned to this PCI device, and the file resource shows the current memory resources allocated by this device.

Configuration Registers and Initialization

In this section, we look at the configuration registers that PCI devices contain. All PCI devices feature at least a 256-byte address space. The first 64 bytes are standardized, while the rest are device dependent. Figure 12-2 shows the layout of the device-independent configuration space.

The standardized PCI configuration registers

Figure 12-2. The standardized PCI configuration registers

As the figure shows, some of the PCI configuration registers are required and some are optional. Every PCI device must contain meaningful values in the required registers, whereas the contents of the optional registers depend on the actual capabilities of the peripheral. The optional fields are not used unless the contents of the required fields indicate that they are valid. Thus, the required fields assert the board’s capabilities, including whether the other fields are usable.

It’s interesting to note that the PCI registers are always little-endian. Although the standard is designed to be architecture independent, the PCI designers sometimes show a slight bias toward the PC environment. The driver writer should be careful about byte ordering when accessing multibyte configuration registers; code that works on the PC might not work on other platforms. The Linux developers have taken care of the byte-ordering problem (see the next section, Section 12.1.8 ), but the issue must be kept in mind. If you ever need to convert data from host order to PCI order or vice versa, you can resort to the functions defined in <asm/byteorder.h> , introduced in Chapter 11 , knowing that PCI byte order is little-endian.

Describing all the configuration items is beyond the scope of this book. Usually, the technical documentation released with each device describes the supported registers. What we’re interested in is how a driver can look for its device and how it can access the device’s configuration space.

Three or five PCI registers identify a device: vendorID , deviceID , and class are the three that are always used. Every PCI manufacturer assigns proper values to these read-only registers, and the driver can use them to look for the device. Additionally, the fields subsystem vendorID and subsystem deviceID are sometimes set by the vendor to further differentiate similar devices.

Let’s look at these registers in more detail:

vendorID: This 16-bit register identifies a hardware manufacturer. For instance, every Intel device is marked with the same vendor number, 0x8086 . There is a global registry of such numbers, maintained by the PCI Special Interest Group, and manufacturers must apply to have a unique number assigned to them.

deviceID: This is another 16-bit register, selected by the manufacturer; no official registration is required for the device ID. This ID is usually paired with the vendor ID to make a unique 32-bit identifier for a hardware device. We use the word signature to refer to the vendor and device ID pair. A device driver usually relies on the signature to identify its device; you can find what value to look for in the hardware manual for the target device.

class: Every peripheral device belongs to a class . The class register is a 16-bit value whose top 8 bits identify the “base class” (or group ). For example, “ethernet” and “token ring” are two classes belonging to the “network” group, while the “serial” and “parallel” classes belong to the “communication” group. Some drivers can support several similar devices, each of them featuring a different signature but all belonging to the same class; these drivers can rely on the class register to identify their peripherals, as shown later.

subsystem vendorID, subsystem deviceID: These fields can be used for further identification of a device. If the chip is a generic interface chip to a local (onboard) bus, it is often used in several completely different roles, and the driver must identify the actual device it is talking with. The subsystem identifiers are used to this end.

Using these different identifiers, a PCI driver can tell the kernel what kind of devices it supports. The struct pci_device_id structure is used to define a list of the different types of PCI devices that a driver supports. This structure contains the following fields:

vendor, device: These specify the PCI vendor and device IDs of a device. If a driver can handle any vendor or device ID, the value PCI_ANY_ID should be used for these fields.

subvendor, subdevice: These specify the PCI subsystem vendor and subsystem device IDs of a device. If a driver can handle any type of subsystem ID, the value PCI_ANY_ID should be used for these fields.

class, class_mask: These two values allow the driver to specify that it supports a type of PCI class device. The different classes of PCI devices (a VGA controller is one example) are described in the PCI specification. If a driver can handle any type of subsystem ID, the value PCI_ANY_ID should be used for these fields.

driver_data: This value is not used to match a device but is used to hold information that the PCI driver can use to differentiate between different devices if it wants to.

There are two helper macros that should be used to initialize a struct pci_device_id structure:

PCI_DEVICE(vendor, device): This creates a struct pci_device_id that matches only the specific vendor and device ID. The macro sets the subvendor and subdevice fields of the structure to PCI_ANY_ID .

PCI_DEVICE_CLASS(device_class, device_class_mask): This creates a struct pci_device_id that matches a specific PCI class.

An example of using these macros to define the type of devices a driver supports can be found in the following kernel files:

These examples create a list of struct pci_device_id structures, with an empty structure set to all zeros as the last value in the list. This array of IDs is used in the struct pci_driver (described below), and it is also used to tell user space which devices this specific driver supports.
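For illustration only (this is not one of the kernel files referred to above; 0x8086 and 0x1229, Intel's vendor ID and an e100-family device ID, are used as stand-ins):

    static struct pci_device_id ids[] = {
            { PCI_DEVICE(0x8086, 0x1229), },   /* vendor 0x8086, device 0x1229 */
            { 0, }                             /* terminating all-zero entry */
    };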

MODULE_DEVICE_TABLE

This pci_device_id structure needs to be exported to user space to allow the hotplug and module loading systems know what module works with what hardware devices. The macro MODULE_DEVICE_TABLE accomplishes this. An example is:
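A sketch, reusing the hypothetical ids[] table from the illustration above:

    MODULE_DEVICE_TABLE(pci, ids);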

This statement creates a local variable called __mod_pci_device_table that points to the list of struct pci_device_id . Later in the kernel build process, the depmod program searches all modules for the symbol __mod_pci_device_table . If that symbol is found, it pulls the data out of the module and adds it to the file /lib/modules/KERNEL_VERSION/modules.pcimap . After depmod completes, all PCI devices that are supported by modules in the kernel are listed, along with their module names, in that file. When the kernel tells the hotplug system that a new PCI device has been found, the hotplug system uses the modules.pcimap file to find the proper driver to load.

Registering a PCI Driver

The main structure that all PCI drivers must create in order to be registered with the kernel properly is the struct pci_driver structure. This structure consists of a number of function callbacks and variables that describe the PCI driver to the PCI core. Here are the fields in this structure that a PCI driver needs to be aware of:

name: The name of the driver. It must be unique among all PCI drivers in the kernel and is normally set to the same name as the module name of the driver. It shows up in sysfs under /sys/bus/pci/drivers/ when the driver is in the kernel.

id_table: Pointer to the struct pci_device_id table described earlier in this chapter.

probe: Pointer to the probe function in the PCI driver. This function is called by the PCI core when it has a struct pci_dev that it thinks this driver wants to control. A pointer to the struct pci_device_id that the PCI core used to make this decision is also passed to this function. If the PCI driver claims the struct pci_dev that is passed to it, it should initialize the device properly and return 0 . If the driver does not want to claim the device, or an error occurs, it should return a negative error value. More details about this function follow later in this chapter.

remove: Pointer to the function that the PCI core calls when the struct pci_dev is being removed from the system, or when the PCI driver is being unloaded from the kernel. More details about this function follow later in this chapter.

suspend: Pointer to the function that the PCI core calls when the struct pci_dev is being suspended. The suspend state is passed in the state variable. This function is optional; a driver does not have to provide it.

resume: Pointer to the function that the PCI core calls when the struct pci_dev is being resumed. It is always called after suspend has been called. This function is optional; a driver does not have to provide it.

In summary, to create a proper struct pci_driver structure, only four fields need to be initialized:
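A sketch in the style of the book's pci_skel example; the probe and remove names here are placeholders for your own callbacks:

    static struct pci_driver pci_driver = {
            .name     = "pci_skel",
            .id_table = ids,
            .probe    = probe,
            .remove   = remove,
    };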

To register the struct pci_driver with the PCI core, a call to pci_register_driver is made with a pointer to the struct pci_driver . This is traditionally done in the module initialization code for the PCI driver:
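A sketch of that initialization code, continuing with the placeholder names:

    static int __init pci_skel_init(void)
    {
            return pci_register_driver(&pci_driver);
    }

    static void __exit pci_skel_exit(void)
    {
            pci_unregister_driver(&pci_driver);
    }

    module_init(pci_skel_init);
    module_exit(pci_skel_exit);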

Note that the pci_register_driver function either returns a negative error number or 0 if everything was registered successfully. It does not return the number of devices that were bound to the driver or an error number if no devices were bound to the driver. This is a change from kernels prior to the 2.6 release and was done because of the following situations:

On systems that support PCI hotplug, or CardBus systems, a PCI device can appear or disappear at any point in time. It is helpful if drivers can be loaded before the device appears, to reduce the time it takes to initialize a device.

The 2.6 kernel allows new PCI IDs to be dynamically allocated to a driver after it has been loaded. This is done through the file new_id that is created in all PCI driver directories in sysfs. This is very useful if a new device is being used that the kernel doesn’t know about just yet. A user can write the PCI ID values to the new_id file, and then the driver binds to the new device. If a driver was not allowed to load until a device was present in the system, this interface would not be able to work.

When the PCI driver is to be unloaded, the struct pci_driver needs to be unregistered from the kernel. This is done with a call to pci_unregister_driver . When this call happens, any PCI devices that were currently bound to this driver are removed, and the remove function for this PCI driver is called before the pci_unregister_driver function returns.

Old-Style PCI Probing

In older kernel versions, the function, pci_register_driver , was not always used by PCI drivers. Instead, they would either walk the list of PCI devices in the system by hand, or they would call a function that could search for a specific PCI device. The ability to walk the list of PCI devices in the system within a driver has been removed from the 2.6 kernel in order to prevent drivers from crashing the kernel if they happened to modify the PCI device lists while a device was being removed at the same time.

If the ability to find a specific PCI device is really needed, the following functions are available:

pci_get_device: This function scans the list of PCI devices currently present in the system, and if the input arguments match the specified vendor and device IDs, it increments the reference count on the struct pci_dev variable found, and returns it to the caller. This prevents the structure from disappearing without any notice and ensures that the kernel does not oops. After the driver is done with the struct pci_dev returned by the function, it must call the function pci_dev_put to decrement the usage count properly back to allow the kernel to clean up the device if it is removed.

The from argument is used to get hold of multiple devices with the same signature; the argument should point to the last device that has been found, so that the search can continue instead of restarting from the head of the list. To find the first device, from is specified as NULL . If no (further) device is found, NULL is returned.

An example of how to use this function properly is:
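A sketch of such a call (PCI_VENDOR_FOO and PCI_DEVICE_FOO are placeholders for the IDs of the device you are looking for):

    struct pci_dev *dev;

    dev = pci_get_device(PCI_VENDOR_FOO, PCI_DEVICE_FOO, NULL);
    if (dev) {
            /* use the device here ... */
            pci_dev_put(dev);       /* drop the reference when finished */
    }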

This function can not be called from interrupt context. If it is, a warning is printed out to the system log.

pci_get_subsys: This function works just like pci_get_device , but it allows the subsystem vendor and subsystem device IDs to be specified when looking for the device.

pci_get_slot: This function searches the list of PCI devices in the system on the specified struct pci_bus for the specified device and function number of the PCI device. If a device is found that matches, its reference count is incremented and a pointer to it is returned. When the caller is finished accessing the struct pci_dev , it must call pci_dev_put .

All of these functions can not be called from interrupt context. If they are, a warning is printed out to the system log.

Enabling the PCI Device

In the probe function for the PCI driver, before the driver can access any device resource (I/O region or interrupt) of the PCI device, the driver must call the pci_enable_device function:
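Its prototype, as declared in <linux/pci.h>, is:

    int pci_enable_device(struct pci_dev *dev);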

This function actually enables the device. It wakes up the device and in some cases also assigns its interrupt line and I/O regions. This happens, for example, with CardBus devices (which have been made completely equivalent to PCI at the driver level).

Accessing the Configuration Space

After the driver has detected the device, it usually needs to read from or write to the three address spaces: memory, port, and configuration. In particular, accessing the configuration space is vital to the driver, because it is the only way it can find out where the device is mapped in memory and in the I/O space.

Because the microprocessor has no way to access the configuration space directly, the computer vendor has to provide a way to do it. To access configuration space, the CPU must write and read registers in the PCI controller, but the exact implementation is vendor dependent and not relevant to this discussion, because Linux offers a standard interface to access the configuration space.

As far as the driver is concerned, the configuration space can be accessed through 8-bit, 16-bit, or 32-bit data transfers. The relevant functions are prototyped in <linux/pci.h> :
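The prototypes, as declared in 2.6-era kernels, look like this:

    int pci_read_config_byte(struct pci_dev *dev, int where, u8 *val);
    int pci_read_config_word(struct pci_dev *dev, int where, u16 *val);
    int pci_read_config_dword(struct pci_dev *dev, int where, u32 *val);
    int pci_write_config_byte(struct pci_dev *dev, int where, u8 val);
    int pci_write_config_word(struct pci_dev *dev, int where, u16 val);
    int pci_write_config_dword(struct pci_dev *dev, int where, u32 val);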

The pci_read_config_ functions read one, two, or four bytes from the configuration space of the device identified by dev . The where argument is the byte offset from the beginning of the configuration space. The value fetched from the configuration space is returned through the val pointer, and the return value of the functions is an error code. The word and dword functions convert the value just read from little-endian to the native byte order of the processor, so you need not deal with byte ordering.

The pci_write_config_ functions write one, two, or four bytes to the configuration space. The device is identified by dev as usual, and the value being written is passed as val . The word and dword functions convert the value to little-endian before writing to the peripheral device.

All of the previous functions are implemented as inline functions that really call the following functions. Feel free to use these functions instead of the above in case the driver does not have access to a struct pci_dev at any particular moment in time:

pci_bus_read_config_byte, pci_bus_read_config_word, pci_bus_read_config_dword: Just like the pci_read_ functions, but struct pci_bus * and devfn variables are needed instead of a struct pci_dev * .

pci_bus_write_config_byte, pci_bus_write_config_word, pci_bus_write_config_dword: Just like the pci_write_ functions, but struct pci_bus * and devfn variables are needed instead of a struct pci_dev * .

The best way to address the configuration variables using the pci_read_ functions is by means of the symbolic names defined in <linux/pci.h> . For example, the following small function retrieves the revision ID of a device by passing the symbolic name for where to pci_read_config_byte :
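A sketch of such a function (PCI_REVISION_ID is the symbolic name for the Revision ID register's offset; the function name is a placeholder):

    static unsigned char skel_get_revision(struct pci_dev *dev)
    {
            u8 revision;

            pci_read_config_byte(dev, PCI_REVISION_ID, &revision);
            return revision;
    }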

Accessing the I/O and Memory Spaces

A PCI device implements up to six I/O address regions. Each region consists of either memory or I/O locations. Most devices implement their I/O registers in memory regions, because it’s generally a saner approach. However, unlike normal memory, I/O registers should not be cached by the CPU because each access can have side effects. The PCI device that implements I/O registers as a memory region marks the difference by setting a “memory-is-prefetchable” bit in its configuration register. [ 4 ] If the memory region is marked as prefetchable, the CPU can cache its contents and do all sorts of optimization with it; nonprefetchable memory access, on the other hand, can’t be optimized because each access can have side effects, just as with I/O ports. Peripherals that map their control registers to a memory address range declare that range as nonprefetchable, whereas something like video memory on PCI boards is prefetchable. In this section, we use the word region to refer to a generic I/O address space that is memory-mapped or port-mapped.

An interface board reports the size and current location of its regions using configuration registers—the six 32-bit registers shown in Figure 12-2 , whose symbolic names are PCI_BASE_ADDRESS_0 through PCI_BASE_ADDRESS_5 . Since the I/O space defined by PCI is a 32-bit address space, it makes sense to use the same configuration interface for memory and I/O. If the device uses a 64-bit address bus, it can declare regions in the 64-bit memory space by using two consecutive PCI_BASE_ADDRESS registers for each region, low bits first. It is possible for one device to offer both 32-bit regions and 64-bit regions.

In the kernel, the I/O regions of PCI devices have been integrated into the generic resource management. For this reason, you don’t need to access the configuration variables in order to know where your device is mapped in memory or I/O space. The preferred interface for getting region information consists of the following functions:

pci_resource_start(dev, bar): The function returns the first address (memory address or I/O port number) associated with one of the six PCI I/O regions. The region is selected by the integer bar (the base address register), ranging from 0-5 (inclusive).

pci_resource_end(dev, bar): The function returns the last address that is part of the I/O region number bar . Note that this is the last usable address, not the first address after the region.

pci_resource_flags(dev, bar): This function returns the flags associated with this resource.

Resource flags are used to define some features of the individual resource. For PCI resources associated with PCI I/O regions, the information is extracted from the base address registers, but can come from elsewhere for resources not associated with PCI devices.

All resource flags are defined in <linux/ioport.h> ; the most important are:

IORESOURCE_IO, IORESOURCE_MEM: If the associated I/O region exists, one and only one of these flags is set.

IORESOURCE_PREFETCH, IORESOURCE_READONLY: These flags tell whether a memory region is prefetchable and/or write protected. The latter flag is never set for PCI resources.

By making use of the pci_resource_ functions, a device driver can completely ignore the underlying PCI registers, since the system already used them to structure resource information.
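For example, a probe()-time sketch that maps BAR 0 (the choice of BAR and the error returns are illustrative only):

    unsigned long start = pci_resource_start(pdev, 0);
    unsigned long len   = pci_resource_len(pdev, 0);
    void __iomem *regs;

    if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM))
            return -ENODEV;         /* BAR 0 is not a memory region */
    regs = ioremap(start, len);     /* map the region before using readb()/writeb() */
    if (!regs)
            return -ENOMEM;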

PCI Interrupts

As far as interrupts are concerned, PCI is easy to handle. By the time Linux boots, the computer’s firmware has already assigned a unique interrupt number to the device, and the driver just needs to use it. The interrupt number is stored in configuration register 60 ( PCI_INTERRUPT_LINE ), which is one byte wide. This allows for as many as 256 interrupt lines, but the actual limit depends on the CPU being used. The driver doesn’t need to bother checking the interrupt number, because the value found in PCI_INTERRUPT_LINE is guaranteed to be the right one.

If the device doesn’t support interrupts, register 61 ( PCI_INTERRUPT_PIN ) is 0 ; otherwise, it’s nonzero. However, since the driver knows if its device is interrupt driven or not, it doesn’t usually need to read PCI_INTERRUPT_PIN .

Thus, PCI-specific code for dealing with interrupts just needs to read the configuration byte to obtain the interrupt number that is saved in a local variable, as shown in the following code. Beyond that, the information in Chapter 10 applies.
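A sketch of that read (the variable names are placeholders; PCI_INTERRUPT_LINE is the symbolic register offset from <linux/pci.h>):

    u8 myirq;

    if (pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &myirq)) {
            /* the configuration read itself failed; bail out */
            return -EIO;
    }
    /* myirq now holds the IRQ number to pass to request_irq() */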

The rest of this section provides additional information for the curious reader but isn’t needed for writing drivers.

A PCI connector has four interrupt pins, and peripheral boards can use any or all of them. Each pin is individually routed to the motherboard’s interrupt controller, so interrupts can be shared without any electrical problems. The interrupt controller is then responsible for mapping the interrupt wires (pins) to the processor’s hardware; this platform-dependent operation is left to the controller in order to achieve platform independence in the bus itself.

The read-only configuration register located at PCI_INTERRUPT_PIN is used to tell the computer which single pin is actually used. It’s worth remembering that each device board can host up to eight devices; each device uses a single interrupt pin and reports it in its own configuration register. Different devices on the same device board can use different interrupt pins or share the same one.

The PCI_INTERRUPT_LINE register, on the other hand, is read/write. When the computer is booted, the firmware scans its PCI devices and sets the register for each device according to how the interrupt pin is routed for its PCI slot. The value is assigned by the firmware, because only the firmware knows how the motherboard routes the different interrupt pins to the processor. For the device driver, however, the PCI_INTERRUPT_LINE register is read-only. Interestingly, recent versions of the Linux kernel under some circumstances can assign interrupt lines without resorting to the BIOS.

Hardware Abstractions

We complete the discussion of PCI by taking a quick look at how the system handles the plethora of PCI controllers available on the marketplace. This is just an informational section, meant to show the curious reader how the object-oriented layout of the kernel extends down to the lowest levels.

The mechanism used to implement hardware abstraction is the usual structure containing methods. It’s a powerful technique that adds just the minimal overhead of dereferencing a pointer to the normal overhead of a function call. In the case of PCI management, the only hardware-dependent operations are the ones that read and write configuration registers, because everything else in the PCI world is accomplished by directly reading and writing the I/O and memory address spaces, and those are under direct control of the CPU.

Thus, the relevant structure for configuration register access includes only two fields:
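The field list is not reproduced in this excerpt. As a sketch based on 2.6-era kernel headers (the exact prototypes may differ between kernel versions), the structure looks roughly like this:

    /* Sketch of struct pci_ops: the two hardware-dependent methods used to
     * read and write configuration registers on a given bus.
     * (u32 comes from <linux/types.h>, struct pci_bus from <linux/pci.h>.) */
    struct pci_ops {
        int (*read)(struct pci_bus *bus, unsigned int devfn,
                    int where, int size, u32 *val);
        int (*write)(struct pci_bus *bus, unsigned int devfn,
                     int where, int size, u32 val);
    };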

The structure is defined in <linux/pci.h> and used by drivers/pci/pci.c , where the actual public functions are defined.

The two functions that act on the PCI configuration space have more overhead than dereferencing a pointer; they use cascading pointers due to the high object-orientedness of the code, but the overhead is not an issue in operations that are performed quite rarely and never in speed-critical paths. The actual implementation of pci_read_config_byte(dev, where, val) , for instance, expands to:
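The expansion itself is not reproduced in this excerpt. Conceptually (ignoring locking and size checking, and with hypothetical names), the chain of dereferences boils down to something like:

    #include <linux/pci.h>

    /* Sketch: roughly what pci_read_config_byte(dev, where, &val) reduces to.
     * The bus-specific method is reached through two pointer dereferences. */
    static int sketch_read_config_byte(struct pci_dev *dev, int where, u8 *val)
    {
        u32 tmp;
        int ret = dev->bus->ops->read(dev->bus, dev->devfn, where, 1, &tmp);

        *val = (u8)tmp;
        return ret;
    }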

The various PCI buses in the system are detected at system boot, and that’s when the struct pci_bus items are created and associated with their features, including the ops field.

Implementing hardware abstraction via “hardware operations” data structures is typical in the Linux kernel. One important example is the struct alpha_machine_vector data structure. It is defined in <asm-alpha/machvec.h> and takes care of everything that may change across different Alpha-based computers.

A Look Back: ISA

The ISA bus is quite old in design and is a notoriously poor performer, but it still holds a good part of the market for extension devices. If speed is not important and you want to support old motherboards, an ISA implementation is preferable to PCI. An additional advantage of this old standard is that if you are an electronic hobbyist, you can easily build your own ISA devices, something definitely not possible with PCI.

On the other hand, a great disadvantage of ISA is that it’s tightly bound to the PC architecture; the interface bus has all the limitations of the 80286 processor and causes endless pain to system programmers. The other great problem with the ISA design (inherited from the original IBM PC) is the lack of geographical addressing, which has led to many problems and lengthy unplug-rejumper-plug-test cycles to add new devices. It’s interesting to note that even the oldest Apple II computers were already exploiting geographical addressing, and they featured jumperless expansion boards.

Despite its great disadvantages, ISA is still used in several unexpected places. For example, the VR41xx series of MIPS processors used in several palmtops features an ISA-compatible expansion bus, strange as it seems. The reason behind these unexpected uses of ISA is the extreme low cost of some legacy hardware, such as 8390-based Ethernet cards, so a CPU with ISA electrical signaling can easily exploit the awful, but cheap, PC devices.

Hardware Resources

An ISA device can be equipped with I/O ports, memory areas, and interrupt lines.

Even though the x86 processors support 64 KB of I/O port memory (i.e., the processor asserts 16 address lines), some old PC hardware decodes only the lowest 10 address lines. This limits the usable address space to 1024 ports, because any address in the range 1 KB to 64 KB is mistaken for a low address by any device that decodes only the low address lines. Some peripherals circumvent this limitation by mapping only one port into the low kilobyte and using the high address lines to select between different device registers. For example, a device mapped at 0x340 can safely use port 0x740 , 0xB40 , and so on.
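To make the aliasing concrete, here is a one-line check (a standalone sketch, not kernel code) of whether two port numbers collide on a device that decodes only the low 10 address lines:

    #include <stdbool.h>

    /* Two ports alias each other on a 10-bit decoder when their low 10 bits
     * match: 0x340, 0x740, and 0xB40 all decode to 0x340. */
    static bool isa_10bit_alias(unsigned int a, unsigned int b)
    {
        return (a & 0x3FF) == (b & 0x3FF);
    }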

While the availability of I/O ports is limited, memory access is even worse. An ISA device can use only the memory range between 640 KB and 1 MB and between 15 MB and 16 MB for I/O registers and device control. The 640-KB to 1-MB range is used by the PC BIOS, by VGA-compatible video boards, and by various other devices, leaving little space available for new devices. Memory at 15 MB, on the other hand, is not directly supported by Linux, and hacking the kernel to support it is a waste of programming time nowadays.

The third resource available to ISA device boards is interrupt lines. A limited number of interrupt lines is routed to the ISA bus, and they are shared by all the interface boards. As a result, if devices aren’t properly configured, they can find themselves using the same interrupt lines.

Although the original ISA specification doesn’t allow interrupt sharing across devices, most device boards allow it. [ 5 ] Interrupt sharing at the software level is described in Chapter 10 .

ISA Programming

As far as programming is concerned, there’s no specific aid in the kernel or the BIOS to ease access to ISA devices (like there is, for example, for PCI). The only facilities you can use are the registries of I/O ports and IRQ lines, described in Section 10.2 .

The programming techniques shown throughout the first part of this book apply to ISA devices; the driver can probe for I/O ports, and the interrupt line must be autodetected with one of the techniques shown in Section 10.2.2 .

The helper functions isa_readb and friends have been briefly introduced in Chapter 9 , and there’s nothing more to say about them.

The Plug-and-Play Specification

Some new ISA device boards follow peculiar design rules and require a special initialization sequence intended to simplify installation and configuration of add-on interface boards. The specification for the design of these boards is called plug and play (PnP) and consists of a cumbersome rule set for building and configuring jumperless ISA devices. PnP devices implement relocatable I/O regions; the PC’s BIOS is responsible for the relocation—reminiscent of PCI.

In short, the goal of PnP is to obtain the same flexibility found in PCI devices without changing the underlying electrical interface (the ISA bus). To this end, the specs define a set of device-independent configuration registers and a way to geographically address the interface boards, even though the physical bus doesn’t carry per-board (geographical) wiring—every ISA signal line connects to every available slot.

Geographical addressing works by assigning a small integer, called the card select number (CSN), to each PnP peripheral in the computer. Each PnP device features a unique serial identifier, 64 bits wide, that is hardwired into the peripheral board. CSN assignment uses the unique serial number to identify the PnP devices. But the CSNs can be assigned safely only at boot time, which requires the BIOS to be PnP aware. For this reason, old computers require the user to obtain and insert a specific configuration diskette, even if the device is PnP capable.

Interface boards following the PnP specs are complicated at the hardware level. They are much more elaborate than PCI boards and require complex software. It’s not unusual to have difficulty installing these devices, and even if the installation goes well, you still face the performance constraints and the limited I/O space of the ISA bus. It’s much better to install PCI devices whenever possible and enjoy the new technology instead.

If you are interested in the PnP configuration software, you can browse drivers/net/3c509.c , whose probing function deals with PnP devices. The 2.6 kernel saw a lot of work in the PnP device support area, so a lot of the inflexible interfaces have been cleaned up compared to previous kernel releases.

PC/104 and PC/104+

Currently in the industrial world, two bus architectures are quite fashionable: PC/104 and PC/104+. Both are standard in PC-class single-board computers.

Both standards refer to specific form factors for printed circuit boards, as well as electrical/mechanical specifications for board interconnections. The practical advantage of these buses is that they allow circuit boards to be stacked vertically using a plug-and-socket kind of connector on one side of the device.

The electrical and logical layout of the two buses is identical to ISA (PC/104) and PCI (PC/104+), so software won’t notice any difference between the usual desktop buses and these two.

Other PC Buses

PCI and ISA are the most commonly used peripheral interfaces in the PC world, but they aren’t the only ones. Here’s a summary of the features of other buses found in the PC market.

Micro Channel Architecture (MCA) is an IBM standard used in PS/2 computers and some laptops. At the hardware level, Micro Channel has more features than ISA. It supports multimaster DMA, 32-bit address and data lines, shared interrupt lines, and geographical addressing to access per-board configuration registers. Such registers are called Programmable Option Select (POS), but they don’t have all the features of the PCI registers. Linux support for Micro Channel includes functions that are exported to modules.

A device driver can read the integer value MCA_bus to see if it is running on a Micro Channel computer. If the symbol is a preprocessor macro, the macro MCA_bus__is_a_macro is defined as well. If MCA_bus__is_a_macro is undefined, then MCA_bus is an integer variable exported to modularized code. Both MCA_bus and MCA_bus__is_a_macro are defined in <asm/processor.h>.

The Extended ISA (EISA) bus is a 32-bit extension to ISA, with a compatible interface connector; ISA device boards can be plugged into an EISA connector. The additional wires are routed under the ISA contacts.

Like PCI and MCA, the EISA bus is designed to host jumperless devices, and it has the same features as MCA: 32-bit address and data lines, multimaster DMA, and shared interrupt lines. EISA devices are configured by software, but they don’t need any particular operating system support. EISA drivers already exist in the Linux kernel for Ethernet devices and SCSI controllers.

An EISA driver checks the value EISA_bus to determine if the host computer carries an EISA bus. Like MCA_bus, EISA_bus is either a macro or a variable, depending on whether EISA_bus__is_a_macro is defined. Both symbols are defined in <asm/processor.h>.

The kernel has full EISA support for devices with sysfs and resource management functionality. This is located in the drivers/eisa directory.

Another extension to ISA is the VESA Local Bus (VLB) interface bus, which extends the ISA connectors by adding a third lengthwise slot. A device can just plug into this extra connector (without plugging in the two associated ISA connectors), because the VLB slot duplicates all important signals from the ISA connectors. Such “standalone” VLB peripherals not using the ISA slot are rare, because most devices need to reach the back panel so that their external connectors are available.

The VESA bus is much more limited in its capabilities than the EISA, MCA, and PCI buses and is disappearing from the market. No special kernel support exists for VLB. However, both the Lance Ethernet driver and the IDE disk driver in Linux 2.0 can deal with VLB versions of their devices.

While most computers nowadays are equipped with a PCI or ISA interface bus, most older SPARC-based workstations use SBus to connect their peripherals.

SBus is quite an advanced design, although it has been around for a long time. It is meant to be processor independent (even though only SPARC computers use it) and is optimized for I/O peripheral boards. In other words, you can’t plug additional RAM into SBus slots (RAM expansion boards have long been forgotten even in the ISA world, and PCI does not support them either). This optimization is meant to simplify the design of both hardware devices and system software, at the expense of some additional complexity in the motherboard.

This I/O bias of the bus results in peripherals using virtual addresses to transfer data, thus bypassing the need to allocate a contiguous DMA buffer. The motherboard is responsible for decoding the virtual addresses and mapping them to physical addresses. This requires attaching an MMU (memory management unit) to the bus; the chipset in charge of the task is called the IOMMU. Although somewhat more complex than using physical addresses on the interface bus, this design is greatly simplified by the fact that SPARC processors have always been designed with the MMU core kept separate from the CPU core (either physically or at least conceptually). Actually, this design choice is shared by other smart processor designs and is beneficial overall. Another feature of this bus is that device boards exploit massive geographical addressing, so there's no need to implement an address decoder in every peripheral or to deal with address conflicts.

SBus peripherals use the Forth language in their PROMs to initialize themselves. Forth was chosen because the interpreter is lightweight and, therefore, can be easily implemented in the firmware of any computer system. In addition, the SBus specification outlines the boot process, so that compliant I/O devices fit easily into the system and are recognized at system boot. This was a great step to support multi-platform devices; it’s a completely different world from the PC-centric ISA stuff we were used to. However, it didn’t succeed for a variety of commercial reasons.

Although current kernel versions offer quite full-featured support for SBus devices, the bus is used so little nowadays that it’s not worth covering in detail here. Interested readers can look at source files in arch/sparc/kernel and arch/sparc/mm .

Another interesting, but nearly forgotten, interface bus is NuBus. It is found on older Mac computers (those with the M68k family of CPUs).

All of the bus is memory-mapped (like everything with the M68k), and the devices are only geographically addressed. This is good and typical of Apple, as the much older Apple II already had a similar bus layout. What is bad is that it’s almost impossible to find documentation on NuBus, due to the close-everything policy Apple has always followed with its Mac computers (and unlike the previous Apple II, whose source code and schematics were available at little cost).

The file drivers/nubus/nubus.c includes almost everything we know about this bus, and it’s interesting reading; it shows how much hard reverse engineering developers had to do.

External Buses

One of the most recent entries in the field of interface buses is the whole class of external buses. This includes USB, FireWire, and IEEE1284 (parallel-port-based external bus). These interfaces are somewhat similar to older and not-so-external technology, such as PCMCIA/CardBus and even SCSI.

Conceptually, these buses are neither full-featured interface buses (like PCI is) nor dumb communication channels (like the serial ports are). It's hard to classify the software that is needed to exploit their features, as it's usually split into two levels: the driver for the hardware controller (like the drivers for PCI SCSI adaptors or PCI controllers introduced in Section 12.1) and the driver for the specific "client" device (the way sd.c handles generic SCSI disks, and so-called PCI drivers deal with cards plugged into the bus).

Quick Reference

This section summarizes the symbols introduced in the chapter:

Header that includes symbolic names for the PCI registers and several vendor and device ID values.

Structure that represents a PCI device within the kernel.

Structure that represents a PCI driver. All PCI drivers must define this.

Structure that describes the types of PCI devices this driver supports.

Functions that register or unregister a PCI driver from the kernel.

Functions that search the device list for devices with a specific signature or those belonging to a specific class. The return value is NULL if none is found. from is used to continue a search; it must be NULL the first time you call either function, and it must point to the device just found if you are searching for more devices. These functions are deprecated; use the pci_get_ variants instead.

Functions that search the device list for devices with a specific signature or belonging to a specific class. The return value is NULL if none is found. from is used to continue a search; it must be NULL the first time you call either function, and it must point to the device just found if you are searching for more devices. The structure returned has its reference count incremented, and after the caller is finished with it, the function pci_dev_put must be called.

Functions that read or write a PCI configuration register. Although the Linux kernel takes care of byte ordering, the programmer must be careful about byte ordering when assembling multibyte values from individual bytes. The PCI bus is little-endian.

Enables a PCI device.

Functions that handle PCI device resources.
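To tie the symbols above together, here is a minimal skeleton of a PCI driver. It is only a sketch: the vendor/device IDs and all names are placeholders, and error handling is trimmed.

    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/pci.h>

    /* Placeholder IDs: replace with the real vendor/device pair. */
    #define SKETCH_VENDOR_ID 0x1234
    #define SKETCH_DEVICE_ID 0x5678

    static struct pci_device_id sketch_ids[] = {
        { PCI_DEVICE(SKETCH_VENDOR_ID, SKETCH_DEVICE_ID) },
        { 0, }
    };
    MODULE_DEVICE_TABLE(pci, sketch_ids);

    static int sketch_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        int ret = pci_enable_device(pdev);
        if (ret)
            return ret;
        /* pci_resource_start()/pci_resource_flags() and pdev->irq are now
         * valid and can be used to map regions and request the interrupt. */
        return 0;
    }

    static void sketch_remove(struct pci_dev *pdev)
    {
        pci_disable_device(pdev);
    }

    static struct pci_driver sketch_driver = {
        .name     = "sketch_pci",
        .id_table = sketch_ids,
        .probe    = sketch_probe,
        .remove   = sketch_remove,
    };

    static int __init sketch_init(void)
    {
        return pci_register_driver(&sketch_driver);
    }

    static void __exit sketch_exit(void)
    {
        pci_unregister_driver(&sketch_driver);
    }

    module_init(sketch_init);
    module_exit(sketch_exit);
    MODULE_LICENSE("GPL");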

[ 1 ] Some architectures also display the PCI domain information in the /proc/pci and /proc/bus/pci files.

[ 2 ] Actually, that configuration is not restricted to the time the system boots; hotpluggable devices, for example, cannot be available at boot time and appear later instead. The main point here is that the device driver must not change the address of I/O or memory regions.

[ 3 ] You’ll find the ID of any device in its own hardware manual. A list is included in the file pci.ids , part of the pciutils package and the kernel sources; it doesn’t pretend to be complete but just lists the most renowned vendors and devices. The kernel version of this file will not be included in future kernel series.

[ 4 ] The information lives in one of the low-order bits of the base address PCI registers. The bits are defined in <linux/pci.h> .

[ 5 ] The problem with interrupt sharing is a matter of electrical engineering: if a device drives the signal line inactive—by applying a low-impedance voltage level—the interrupt can’t be shared. If, on the other hand, the device uses a pull-up resistor to the inactive logic level, sharing is possible. This is the norm nowadays. However, there’s still a potential risk of losing interrupt events since ISA interrupts are edge triggered instead of level triggered. Edge-triggered interrupts are easier to implement in hardware but don’t lend themselves to safe sharing.


How is PCI segment(domain) related to multiple Host Bridges(or Root Bridges)? [closed]

I'm trying to understand how PCI segment(domain) is related to multiple Host Bridges?

Some people say multiple PCI domains corresponds to multiple Host Bridges, but some say it means multiple Root Bridges under a single Host Bridge. I'm confused and I don't find much useful information in PCI SIG base spec.

(1) Suppose I setup 3 PCI domains in MCFG, do I have 3 Host Bridges that connects 3 CPUs and buses, or do I have 3 Root Bridges that support 3x times buses but all share a common Host Bridge in one CPU?
(2) If I have multiple Host Bridges(or Root Bridges), do these bridges share a common South Bridge(e.g., ICH9), or they have separate ones?

I'm a beginner and google did not solve my problems much. I would appreciate it if someone could give my some clues.



2 Answers

The wording used is confusing. I'll try to fix my terminology with a brief and incomplete summary of the PCI and PCI Express technology. Skip to the last section to read the answers.

The Conventional PCI bus (henceforward PCI) is designed around a bus topology: a shared bus is used to connect all the devices.

To create more complex hierarchies, some devices can operate as bridges: a bridge connects a PCI bus to another, secondary, bus. The secondary bus can be another PCI bus (the device is called a PCI-to-PCI bridge, henceforward P2P) or a bus of a different type (e.g. a PCI-to-ISA bridge).

This creates a tree-shaped topology (the figure from the original answer is not reproduced here).

Informally, each PCI bus is called a PCI segment. In that figure, two segments were shown (PCI BUS 0 and PCI BUS 1).

PCI defines three types of transactions: memory, I/O, and configuration. The first two are assumed to be required knowledge. The third one is used to access the configuration address space (CAS) of each device; within this CAS it's possible to meta-configure the device, for example to set where it is mapped in the system memory address space.

In order to access the CAS of a device, the device must be addressable. Electrically, each PCI slot (either integrated or not) in a PCI bus segment is wired to create an addressing scheme made of three parts: device (0-31), function (0-7), register (0-255). Each device can have up to eight logical functions, each one with a CAS of 256 bytes.

A bus number is added to the triple above to uniquely identify a device within the whole bus topology (and not only within its bus segment). This quadruplet is called an ID address. It's important to note that these ID addresses are assigned by software (except for the device part, which is fixed by the wiring). They are logical; however, it is advised to number the buses sequentially from the root.

The CPU doesn't generate PCI transactions natively; a Host Bridge is necessary. It is a bridge (conceptually a Host-to-PCI bridge) that lets the CPU perform PCI transactions. For example, in the x86 case, any memory write or I/O write not claimed by other agents (e.g. memory, memory-mapped CPU components, legacy devices, etc.) is passed to the PCI bus by the Host Bridge. To generate CAS transactions, an x86 CPU writes to the I/O ports 0xcf8 and 0xcfc (the first receives the ID address, the second the data to read/write).
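As an illustration of the legacy x86 mechanism just mentioned, the 0xCF8/0xCFC port pair is used roughly as follows. This is a user-space-style sketch (on Linux it needs iopl() and root, and it ignores locking); real code lives in the kernel.

    #include <stdint.h>
    #include <sys/io.h>     /* outl()/inl() on x86 Linux; requires iopl(3) */

    /* Sketch: legacy PCI configuration mechanism #1 (ports 0xCF8/0xCFC).
     * Builds the CONFIG_ADDRESS value and reads one 32-bit register. */
    static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t func,
                                   uint8_t reg)
    {
        uint32_t addr = (1u << 31)              /* enable bit            */
                      | ((uint32_t)bus  << 16)  /* bus number            */
                      | ((uint32_t)dev  << 11)  /* device number (0-31)  */
                      | ((uint32_t)func << 8)   /* function number (0-7) */
                      | (reg & 0xFC);           /* dword-aligned offset  */

        outl(addr, 0xCF8);      /* CONFIG_ADDRESS */
        return inl(0xCFC);      /* CONFIG_DATA    */
    }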

A CPU can have more than one Host Bridge; nothing prevents it, though it's very rare. More commonly, a system has more than one CPU, each with an integrated Host Bridge, so the system as a whole has more than one Host Bridge.

For PCI, each Host Bridge establishes a PCI domain : a set of bus segments. The main characteristic of a PCI domain is that it is isolated from other PCI domains: a transaction is not required to be routable between domains.

An OS can assign the bus numbers of each PCI domain as it pleases: it can reuse the same bus numbers across domains, or it can assign them sequentially.

Unfortunately, the word PCI domain also has a meaning in the Linux kernel, where it is used to number each Host Bridge. As far as PCI is concerned this works, but with the introduction of PCI Express it gets confusing, because PCI Express has its own name for the "Host Bridge number" (i.e. the PCI segment group) and the term PCI domain denotes the downstream link of a PCI Express root port.

PCI Express

The PCI Express bus (henceforward PCIe) is designed around a point-to-point topology: a device is connected only to another device.

To maintain software compatibility, extensive use of virtual P2P bridges is made. While the basic components of the PCI bus were devices and bridges, the basic components of PCIe are devices and switches. From the software perspective, nothing has changed (apart from new features added) and the bus is enumerated the same way: with devices and bridges.

The PCIe switch is the basic glue between devices; it has n downstream ports. Internally the switch has a PCI bus segment, and for each port a virtual P2P bridge is created on that internal bus segment (the virtual adjective is there because each P2P only responds to CAS transactions, which is enough for PCI-compatible software). Each downstream port is a PCIe link. A PCIe link is regarded as a PCI bus segment; this checks with the fact that the switch has a P2P bridge for each downstream port (in total there are 1 + n PCI bus segments for a switch). A switch has one more port: the upstream port. It is just like a downstream port but uses subtractive decoding; as with a network switch, it is used to receive traffic from the "logical external network" and to route unknown destinations.

(Figure: logical block diagram of a PCIe switch; not reproduced here.)

So, counting the upstream link as well, a switch takes 1 + n + 1 PCI bus segments.

Devices are connected directly to a switch.

In the PCI case, a bridge connected the CPU to the PCI subsystem, so it's logical to expect a switch to connect the CPU to the PCIe subsystem. This is indeed the case, with the PCI Root Complex (henceforward PCR). The PCR is basically a switch with an important twist: each one of its ports establishes a new PCI domain. This means that it is not required to route traffic from port 1 to port 2 (while a switch, of course, is). This creates a shift with the Linux terminology, as mentioned before, because Linux assigns a domain number to each Host Bridge or PCR while, per the specifications, each PCR has multiple domains. Long story short: same word, different meanings. The PCIe specification uses the term PCI segment group to define a numbering per PCR (simply put, the PCI segment group is the base address of the extended CAS mechanism of each PCR, so there is a natural one-to-one mapping).

Due to their isolation property, the ports of the PCR are called PCIe Root Port .

The term Root Bridge doesn't exist in the specification; I can only find it in the UEFI Root Bridge IO Specification as an umbrella term for both the Host Bridge and the PCR (since they share similar duties).

The Host Bridge also goes under the name of Host Adapter.

Finally your question

If you have 3 PCI domains, you either have 3 Host Bridges or 3 PCIe root ports.

If by PCI domains you meant PCI buses, in the sense of PCI bus segments (irrespective of their isolation), then you can have either a single Host Bridge/PCR handling a topology with 3 busses or more than one Host Bridge/PCR handling a combination of the 3 busses. There is no specific requirement in this case, as you can see it's possible to cascade busses with bridges.

If you want the bus to not be isolated (so not to be PCI domains) you need a single Host Bridge or a single PCIe root port. A set of P2P bridges (either real or virtual) will connect the busses together.

The bridged platform faded out years ago; we now have a System Agent integrated into the CPU that exposes a set of PCIe lanes (typically 20+) and a Platform Controller Hub (PCH), connected to the CPU with a DMI link. The PCH can also be integrated into the same package as the CPU. The PCH exposes some more lanes that, from a software perspective, appear to come from the CPU's PCR.

Anyway, if you had multiple Host Bridges, they were usually on different sockets, but there was typically only a single south bridge for them all. However, this was not (and is not) strictly mandatory. The modern Intel C620 PCH can operate in Endpoint Only Mode (EPO), where it is not used as the main PCH (with firmware and boot responsibilities) but as a set of PCIe endpoints.

The idea is that a Host Bridge just converts CPU transactions into PCI transactions; where these transactions are routed depends on the bus topology, and that is by itself a very creative topic. Where the components of this topology are integrated is another creative task: in the end it's possible to have a separate chip dedicated to each Host Bridge, a big single chip shared (or partitioned) among them all, or even both at the same time!


  • Wow, thank you so much for such a wonderful answer! Sorry for the late feedback as I got busy with something else at that time, and I wish I saw this answer earlier. It is more clear now, I think my original purpose is more like PCI segment group defined with _SEG() , which is per host bridge. Just one more little question, the MCFG table allows a maximal of 65536 PCI segment groups, and each group could require at most 256*32*8*4K=256M configuration space, in total 65536*256M=16TB space will be occupied. Do we need to make sure we have enough space before allocating segment group? Thanks! –  Zihan Commented May 2, 2018 at 4:53
  • @biggerfish My understanding is that when a segment group is allocated it needs its 256MiB of reserved addresses. –  Margaret Bloom Commented May 2, 2018 at 8:01
  • this gets confusing because PCI express has its own name for "Host Bridge number" (i.e. PCI segment group) and the term PCI domain denotes the downstream link of PCI express port . Is it PCI express root port instead ? –  joz Commented Sep 4, 2018 at 14:28
  • 1 @GeorgeSovetov I don't know if there's a scheme, I've never had a computer with multiple domains/pci segment groups. You can use lspci under linux if you have one. The OS starts from the bus 0, testing each device. If any is a bridge, it reads the secondary bus number from the bridge CAS (which was set by the firmware). Then repeats the scan on the secondary bus. I don't remember if ACPI has some tables dedicated to PCI enumeration, they are not necessary in theory but may speed up the enumeration process. If they exist, the OS has the choice to use them. –  Margaret Bloom Commented Apr 18, 2019 at 15:07
  • 1 @GeorgeSovetov The bus is programmed by the software, google for "PCI-to-PCI bridge specifications". I'm not sure if the host bridge/PCI root complex has a programmable primary bus number. –  Margaret Bloom Commented Apr 18, 2019 at 18:55

DOMAIN/SEGMENT in Configuration Space addressing (Domain tends to be the Linux term, Segment is the Windows and PCISIG term) is primarily a PLATFORM level construct. DOMAIN and SEGMENT are used in this answer interchangeably. Logically, SEGMENT is the most significant selector (most significant address bits selector) in the DOMAIN:Bus:Device:Function:Offset addressing scheme of the PCI-family Configuration Space addressing mechanism (PCIe, PCI-X, PCI, and later follow-on software-compatible bus interconnects).

In PCI Express (PCIe) and earlier specifications, DOMAIN does not appear ON THE BUS (or on the link), or in the link transaction packets. Only the BUS, DEVICE, FUNCTION, and OFFSET appear in the transactions or on the bus. However, SEGMENT does have a place in how the LOGICAL, software-based SEGMENT:Bus,Dev,Func:OFFSET Configuration Space software address is used to actually create an interconnect protocol packet (PCIe) or bus cycle sequence (PCI) that is described as a Configuration Space transaction. The PCI Express (PCIe) specification's Enhanced Configuration Access Mechanism (ECAM) partially addresses this. The remainder of the coverage is handled by the PCI Firmware Specification 3.2 (which covers the newer PCIe Specification software requirements).

In the modern PCIe, the Configuration Space Access Method is handled by the ECAM mechanism by the Operating System, which abstracts such configuration space access mechanisms (the mechanism of turning a CPU memory space accessing instruction into a Bus/Interconnect Configuration Space transaction). The Operating System software understands SEGMENT/DOMAIN as the highest level (top most) logical selector and address component in the SEGMENT:BUS:DEVICE:FUNC:OFFSET address scheme for the Configuration Space. How the SEGMENT moves from software logical concept to physical hardware instantiation comes in the form of the ECAM translator, or specifically in the existence of multiple ECAM translaters in the platform. The ECAM translator translates between a memory type transaction and a configuration space type transaction. The PCIe specification describes how a SINGLE ECAM translator implementation works to translate particular memory address bits in a targeted translation memory write or read, into a Configuration Space write or read. This works as follows:
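The step-by-step description did not survive in this copy of the answer. As a sketch of how the ECAM translation is usually described (the base address is per-SEGMENT and comes from the MCFG table; the function name is made up), the bus, device, function, and register offset are simply packed into memory address bits:

    #include <stdint.h>

    /* Sketch: compute the memory address of a device's configuration register
     * inside an ECAM region. ecam_base is the per-SEGMENT base from MCFG. */
    static uint64_t ecam_cfg_address(uint64_t ecam_base,
                                     uint8_t bus, uint8_t dev,
                                     uint8_t func, uint16_t offset)
    {
        return ecam_base
             + ((uint64_t)bus  << 20)   /* 8 bits of bus number     */
             + ((uint64_t)dev  << 15)   /* 5 bits of device number  */
             + ((uint64_t)func << 12)   /* 3 bits of function       */
             + (offset & 0xFFF);        /* 4 KB of config space per function */
    }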

What is not covered clearly (or at all, really) is that multiple ECAM regions can exist. A platform can set up multiple ECAM regions. Why would a platform do this? Because the bus-selection field of the ECAM address (8 bits) is restrictive, allowing for only 256 total buses in the system, which on some systems is insufficient.

The PCI Firmware Specification 3.2 (Jan 26, 2015) describes the ECAM from a software logical perspective. The Firmware Specification describes a software memory structure (present in BIOS-reserved regions used by the BIOS and the Operating System to communicate) called the MCFG table. The MCFG table describes the one OR MORE instances of ECAM hardware-based configuration space cycle generators present in the platform's hardware implementation. These are regions that translate memory address space transactions (memory writes and reads) into configuration space cycle transactions; i.e., ECAM hardware implementations are the mechanisms that CPUs instantiate to allow the generation of Configuration Space transactions by software. A platform implementation (usually specified and limited by the CPU/chipset design, but sometimes also decided by BIOS design choices) allows for some number of ECAM Configuration Space cycle generators. A platform must support at least one, and then it would have a single SEGMENT. But a platform may support multiple ECAM, and then it has a SEGMENT for each ECAM supported. The MCFG table holds one Configuration Space base address allocation structure PER ECAM that is supported on the platform. A single-SEGMENT platform will have only a single entry; a multi-SEGMENT platform will have multiple entries, one per ECAM that is supported. Each entry contains the memory base address (of the ECAM region Configuration Space cycle generator), a logical SEGMENT group corresponding to this ECAM and sub-range of bus numbers, and a sub-range of bus numbers (start and end) that exist in this logical SEGMENT.
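For reference, each MCFG entry has a small fixed layout. The following is a sketch consistent with the description above (the field names are descriptive, not copied from any particular header, and the real table entry is byte-packed):

    #include <stdint.h>

    /* Sketch: one MCFG "configuration space base address allocation" entry.
     * The table carries one of these per ECAM region in the platform. */
    struct mcfg_allocation {
        uint64_t base_address;      /* ECAM base for this segment/bus range */
        uint16_t pci_segment;       /* PCI segment group number             */
        uint8_t  start_bus_number;  /* first bus decoded by this ECAM       */
        uint8_t  end_bus_number;    /* last bus decoded by this ECAM        */
        uint32_t reserved;
    };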

A BIOS cannot just decide it wants lots of ECAM regions, describe multiple ECAM, and use regular memory addresses as their "base address". The hardware must actually, by design, instantiate the memory-address-to-Configuration-Space cycle generating logic anchored to a specific address (sometimes fixed by the CPU/chipset design, and sometimes configurable in location by a CPU/chipset-specific, non-standard configuration register that the BIOS can program). In either case, the BIOS describes which ECAM are present and, depending on the design, describes the one way, or the particular manner, in which the group of ECAM are configured (or disabled), yielding the set of active ECAM on the system. Part of such BIOS configuration includes setting up the configurable ECAM, choosing how many are enabled if that is configurable, deciding where they will live (at what base addresses), and configuring which chipset devices correspond to which ECAM and which Root Ports are associated with which ECAM(s). Once this BIOS-internal configuration is done, the BIOS must describe these platform hardware and BIOS decisions to the Operating System. This is done using the MCFG table defined by the PCI Firmware Specification, which is part of the ACPI mechanisms for describing the platform from the BIOS (firmware) to the Operating System.

Using the MCFG table scheme, it is possible to have and describe multiple ECAM regions and multiple logical SEGMENTs, one per ECAM. It is also possible to have a single SEGMENT that is actually split among multiple ECAM (multiple entries, all using the same SEGMENT, with non-overlapping bus numbers). But the typical use of the MCFG in a multi-segment configuration is to allow for multiple segments where the bus numbers are duplicated; e.g. each SEGMENT can have up to a full complement of 256 buses, separate from the up to 256 buses that might exist in another SEGMENT.

Three groups of SOFTWARE are aware of SEGMENT in Configuration Space addressing: the BIOS (to create the MCFG table), the Operating System (to read the MCFG table, get the ECAM base addresses, and handle the logical-to-physical address translation tasks by accessing the correct ECAM at the correct offset), and, last, ALL OTHER Bus,Device,Function (BDF)-aware software. All software MUST be SEGMENT,BUS,DEV,FUNC aware, not just Bus,DEV,FUNC aware. Software that assumes the SEGMENT is always 0 is BROKEN. If you have ever created such software, you should rush back to your desk and fix it NOW, before anyone sees it, and certainly before it is released in a product! :-)

Platform designs (in hardware ECAM support, and BIOS design) may implement multiple ECAM regions. This is usually done to circumvent the 256-total-bus restriction that comes from using only a single ECAM. Because the ECAM is defined by the PCISIG, and because that definition revolves around bus number limits ON THE BUS (in transaction fields), a single ECAM cannot implement a Configuration Space generator for more than 256 buses. However, platforms CAN and DO instantiate multiple ECAM regions and have an MCFG table describing multiple SEGMENTs with multiple ECAM base addresses, and this allows the platform to have more than 256 total buses (but only a maximum of 256 buses in each PCIe device tree rooted at a PCIe Root Port, and also only a maximum of 256 for all Root Ports together that share a common ECAM Configuration Space generator). Each ECAM region can describe up to 256 buses in its DOMAIN (or SEGMENT). How the platform decides to group host root ports (cache-coherent-domain to PCI/PCIe-domain bridges) into SEGMENTs is platform specific and arbitrary to the chipset/CPU design and BIOS configuration. Most platforms are fixed and simplistic (often with only a single ECAM), and some are flexible, rigorous, and configurable, allowing for a number of solutions. The type of solution that provides for the maximum number of PCIe bus numbers to be utilized at the PLATFORM level is to support one ECAM per Root Port. (Few platforms today do this, but ALL SHOULD!)

There are two mechanisms used to describe "how" the platform decided to group devices, endpoints, and switches into SEGMENTs. The first is the afore-mentioned MCFG structure, which simply lists the multiple SEGMENTs, their associated ECAM (potentially more than one if the bus ranges in one SEGMENT are split among multiple ECAM), and the base address of each ECAM (or ECAM bus number sub-region). This method by itself is generally sufficient for many enumeration tasks, as it allows the OS to enumerate all the segments, find their ECAM, and then do a PCI Bus,Device,Function scan of all the devices in each SEGMENT. However, a second mechanism is also available which augments the MCFG information: the _SEG descriptor in the ACPI namespace. The ACPI specification has a generalized platform description mechanism to describe the relation of known devices in the system in a manner that is Operating System independent, but which allows the Operating System to parse the data and digest the platform layout. This mechanism is called the ACPI namespace. Within the namespace, devices are "objects", so PCIe endpoints, PCIe root ports, and PCIe switches that are fixed and included on an ACPI-implementing system's motherboard are typically described. In this namespace, objects appear and have qualifiers or decorators. One such decorator "method" is the _SEG method, which describes which SEGMENT a particular object is located in. In this way, built-in devices can be grouped into a particular ECAM access region, or, more commonly, particular PCIe Root Ports are grouped into SEGMENTs (and associated ECAM access regions, with MCFG-described base addresses). Additionally, devices that can be hot-added (and which are not statically present on the motherboard) can describe the SEGMENT they create anew upon hot-plug, or the pre-existing SEGMENT that they join upon hot-plug addition, and the bus numbers in that SEGMENT that they instantiate.
This is accomplished in the namespace using the _CBA method decorator on the Root Port objects that are described in such namespaces, and it is used in combination with the _SEG method. The _CBA applies only to top-level "known" hot-pluggable elements, such as when two 4-CPU systems can be dynamically "joined" into a single 8-CPU system, and their respective PCIe root port elements thus also "join" the new single system, which expands from the PCIe root ports of the original base 4-CPU system to include the new additional PCIe root ports of the added 4 CPUs.

For PCIe switches that appear in slots or external expansion chassis, the _SEG (or SEGMENT value) is generally inherited from the most senior Root Port already in the platform, at the top level of that PCIe device tree. The SEGMENT that includes that root port, is the segment that all device below that Root Port belong too.

One often reads in older collateral (before the invention of multiple ECAM, and the concept of SEGMENTs in the Configuration Space and associated Operating System software), that PCIe enumeration is done by scanning through Bus numbers, then Device Numbers, then Function numbers. This is INCORRECT, and outdated. In reality, MODERN (CURRENT) BIOS and Operating System actually scan by stepping through the SEGMENT (selecting the ECAM to use), then the Bus number, then the Device number, and finally the Function number (which applies offsets within the particular ECAM) to generate Configuration Cycles on and below a particular PCIe Root Port (e.g. on a particular PCIe device tree.)

Older "PCI" mechanisms (before the Enhanced Configuration Configuration Space was defined for PCIe) are unaware of SEGMENT. The older CF8/CFC configuration space access mechanism of PCI when supported by a hardware platform (which should NO LONGER BE USED BY ANY NEWLY WRITTEN SOFTWARE, ever) generally implement the best practical solution for legacy PCI-only (not PCIe-aware) aware operating systems, that the old mechanism is hard coded to SEGMENT 0 for that mechanism. This means that PCI-only-aware sytems can only access the device in SEGMENT 0. All major operating systems have supported the PCIe ECAM enhanced mechanism for over 10 years now, and any use of the CF8/CFC mechanism by software today is considered out of date, archaic and broken, and should be replaced by modern use of ECAM mechanisms, support for the MCFG table and multiple ECAM at a minimum, and if required by the dictates of the Operating System, supplemented by ACPI Specification namespace _SEG and _CBA attribute object method information ingestion by the Operating System for full dynamic hot-plug situations. Nearly all non-hotplug situations can be handled by MCFG alone if the OS is not using ACPI hotplug methods for other tasks. If the OS uses ACPI hotplug for other device and namespace operations, then support of _SEG and _CBA is usually additionally required, both of the Operating System, and of the BIOS to generate these APCI namespace objects in the manner that describes the device association grouping to SEGMENT and thus to the specific ECAM hardware that supports Configuration Cycle generation for that device, root port, or root complex bridge or device. Modern software uses SEGMENTs, and only broken and incorrect software assumes that all devices and bridges are present in SEGMENT 0. This is not always true now, and is increasingly untrue as time goes by.

SEGMENT grouping tends to follow some logical rationale in the hardware design and the limitations upon that design's configuration. (But not always; sometimes it is just arbitrary and weird.) For instance, on Intel 8-socket systems (large multiprocessors), the "native" coherence implementation tends to be limited to 4 sockets in most cases. When the vendor builds an 8-socket system, it is usually done by having two groups of 4 sockets each, connected by a cluster switch on the coherent interconnect between them. That type of platform might implement two PCIe SEGMENTs, one per each 4-socket cluster. But there is no restriction on how the platform might want to make use of multiple SEGMENTs and multiple ECAM. A two-socket system COULD implement multiple ECAM, one SEGMENT per CPU, in order to allow up to 256 PCIe buses per CPU, 512 total. If such a platform instead mapped all of the root ports from both CPUs into a single SEGMENT (much more common), then the entire platform can only have 256 buses. An optimally designed platform (and a more expensive one in terms of ECAM hardware resources) would provide an ECAM for each and every Root Port in the system, and one ECAM for each Root Complex device in the platform. A two-CPU system with 8 Root Ports, 4 on each CPU, and 4 Root Complex devices on each CPU, would implement 16 SEGMENTs, 8 of which (the Root Port ones) would each support the maximum of 256 buses, while the 8 supporting the Root Complex device trees would instantiate sufficient buses to map the Root Complex devices and bridges. Thus such a fully composed two-socket system would support a maximum of 8*256 buses plus the buses required by the built-in Root Complex devices, giving bus support on the high side of 2048+ buses. Any "real" server designed today should be designed this way. I'm still waiting to see this modern "real" Intel or AMD server, rather than the play toys being put out these days.

Most operating system software need not concern itself with "how" SEGMENT associations are implemented; it simply needs to allow for the fact that the logical SEGMENT value IS PART OF THE CONFIGURATION SPACE addressing. DO NOT CODE YOUR STUFF FOR just Bus,Device,Function, assuming SEGMENT=0; it MUST BE coded for (SEGMENT, BUS, DEVICE, FUNCTION), unless you want to be labeled a slacker-type, doing-it-wrong, short-cut-taking imbecile. Support SEGMENT values! They are important now, and will become increasingly important in the future as even more pressure is put on the bus number space, and platforms run out of bus numbers by being limited to the lowly and restrictive 256 buses present in a single SEGMENT. The single-SEGMENT restriction happens because of hardware design, but it also happens because software is not properly written and prepared for multi-SEGMENT platforms. Do not be the person that creates bad software that assumes a single SEGMENT (SEGMENT=0). DO NOT BE THAT GUY! Do not write Operating System software this way, do not write BIOS software this way, and do not write applications that are Bus,Device,Function aware but are not SEGMENT,BUS,DEVICE,FUNCTION aware.

Platform operating system software (in the kernel in *nix, or in the HAL in Windows) takes the SEGMENT value and uses it to select WHICH ECAM it will access (e.g. which ECAM base address it will add the BDF/offset memory offset to). Then it uses the Bus, Device, and Function values to index into the higher address bits within that ECAM, and finally it uses the Configuration Space device register offset to fill in the lower portion of the memory address that is fed into the ECAM Configuration Space transaction generator as its memory address input.

On Intel compatible platforms (Intel & AMD, etc.) the PCI Firmware Specification and the ACPI Specification describes how the BIOS tells the Operating System (Linux, Windows, FreeBSD for example) about where the one ECAM (single SEGMENT) or each of the ECAM (multiple SEGMENT) base addresses are located in the memory address space.

The SEGMENT never appears on the bus in PCIe or PCI. In PCIe, the RID (Routing Identifier) encodes only the Bus# and the Device/Func# of the sender. Likewise, in configuration cycles, only the Bus# and Device#/Func# of the destination target are encoded in the downstream transaction. And Device/Func# can get modified treatment in PCIe if ARI (Alternative Routing-ID Interpretation) mode is enabled. But the SEGMENT value does not appear (nor does it in PCI bus sequences). It is essentially a software construct, but it has a real hardware instantiation in the form of the platform hardware support (CPU and chipset) for multiple ECAM (multi-SEGMENT) or only a single ECAM (single-SEGMENT). Note that devices in different SEGMENTs can in fact still do peer-to-peer direct communication. Peer-to-peer transactions occur using memory addressing (which is a single global shared space among all SEGMENTs; e.g. all segments still share a single unified memory address space, at least after IOMMU translation and transit through the Root Port, which is a prerequisite for potentially being on different SEGMENTs). SEGMENTs can have colliding and duplicated bus number spaces, but these describe DIFFERENT buses when they are present in DIFFERENT SEGMENTs. A multiple-ECAM system can in fact implement a single shared bus number space as well as independent, duplicative bus number spaces. In practice this is a rarity, as a system would usually use a single ECAM and a single SEGMENT for a single bus address space. However, some odd hardware might need to make use of a single-segment, multi-ECAM, shared single bus run split across multiple ECAM, providing for a 256-bus maximum for some odd reason of design (usually hotplug or dynamic configuration related).

A theoretical platform that had LOTS of built-in devices in its "chipset/uncore" component and two CPU sockets in its design COULD implement a four-SEGMENT design if it wished. It could put all the built-in CPU devices in their own SEGMENT (one per CPU), using two segments, and then map each CPU's PCIe root ports into their own SEGMENT, again a unique SEGMENT per CPU, for a total of 4 SEGMENTs. This would allow for 4 * 256 = 1024 buses in the whole system.

A different theoretical platform, with the same two-socket count, could map all devices from all CPUs (in this case just two), all built-in devices, and all the present root ports into a single ECAM, and thus a single SEGMENT. Such a platform would have only the one ECAM, so it would be limited to a total of 256 buses for the entire system. As a result, it would be much more likely to run out of bus numbers if the platform were loaded up with big, complicated, multi-endpoint add-in cards that use PCIe switches to support the multiple devices present (e.g. a switch-fronted, AI-oriented GPU card), or if it had multiple fan-out switches (to increase its slot count), or if external PCIe switch extenders (to outside PCIe enclosures) were used and supported properly by that vendor.

The "best" platforms being designed for now, and the future will implement an independent ECAM for each and every Root Port, allowing for each Root Port to support up to 256 bus in its device tree alone, independent of every other Root Port. To date, I am still waiting for this platform, one that would have lots and lots of PCIe SEGMENTs corresponding to lots of PCIe Root Ports. Such designs are critical for Compose-able I/O solutions, for shared memory solutions, for tiered memory solutions, for compose-able storage solutions, for large external I/O enclosure solutions, for CXL enabled accelerators, etc. In other words, for modern computing. When a platform comes out and says it has 5% more clock than the last one, I yawn. When one comes out that support per Root Port ECAM, I will take notice, and that platform will get my gold seal.

The pressure on the bus number space is at the breaking point now, so the use of more segments in the immediate future is likely (or should be if Intel and AMD are paying attention). Technologies like CXL (a PCIe software model compatible bus infrastructure) will only increase this pressure on the Configuration Space bus number limitations of a single SEGMENT (256 bus is not a lot these days.) Every switch uses a bus internally, and every link uses a bus, and thus a large slot count, high fanout system WILL consume more than 256 bus. Multi-SEGMENT designs are here now, and will be increasingly common. FIX YOUR SOFTWARE NOW!

PCI Express Base Specification 5.0 (or any earlier version), section 7.2, "PCI Express Enhanced Configuration Access Mechanism (ECAM)": http://www.pcisig.com

PCI Firmware Specification 3.2 (or newer), section 4.1.2, "MCFG Table Description": http://www.pcisig.com

ACPI Specification, "MCFG" definition, and the _SEG (segment) namespace qualifier in the ACPI namespace description: http://uefi.org

UEFI Specification, which describes how the OS finds the MCFG table and all other ACPI-based tables on modern UEFI platforms with UEFI BIOS (boot firmware): http://uefi.org


  • You sound very excited. 'Segment' actually means a bus. It's segment group. Although, apparently the Intel manuals are using segment now to mean segment group. Anyway, segment groups are just a consequence of configuring particular 256MiB MMCFG CBo SAD decoder entries to a particular socket Node ID. An access to a region causes the CBo to spawn a QPI message to the programmed socket Node ID. The Node ID is what differentiates between 2 identical bus numbers. The target socket's Ubox (essentially root complex) will pick it up and strip away the node ID. It now has all 256 busses to use. –  Lewis Kelsey Commented Feb 24, 2021 at 3:55
  • Officially, the socket's segment group can be specified in the cpubusno register by software, but it doesn't do anything other than tell software. This is also relayed in the ACPI tables. codywu2010.wordpress.com/2015/11/29/… setting the bus to 64 like this for the 2nd socket appears to be non-standard and you can set it to 0 –  Lewis Kelsey Commented Feb 24, 2021 at 11:01
  • Okay you convinced me for my PCI device tree enumeration in the OS to check for more than MCFG ECAM Segment 0. –  Astralux Commented Apr 15, 2021 at 10:55


Bus:Device.Function (BDF) Notation

Simple BDF Notation

BDF stands for the Bus:Device.Function notation used to succinctly describe PCI and PCIe devices. The simple form of the notation is:

  • PCI Bus number in hexadecimal, often padded with leading zeros to two or four digits
  • A colon (:)
  • PCI Device number in hexadecimal, often padded with a leading zero to two digits. Sometimes this is also referred to as the slot number.
  • A decimal point (.)
  • PCI Function number in hexadecimal

For example, 00:02.0 describes Bus 0, Device 2, Function 0.

BDF Notation Extension for PCI Domain

There is an extended form of the BDF notation which, confusingly, is also typically referred to as BDF. Fortunately, the two variants are usually interchangeable.

The extension of the notation is to prefix the simple notation with:

  • PCI Domain number, often padded with leading zeros to four digits

For example, 0000:00:02.0 describes Domain 0, Bus 0, Device 2, Function 0.

Confusingly, PCI domains do not correspond to Xen domains. PCI domains are a physical property of the host, as are PCI buses. In particular, note that the output of "xl pci-list <domain>" refers to PCI domains, while the argument it expects is a Xen domain. See VTdHowTo for more information on the use of "xl pci-list <domain>".
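As a concrete illustration of both forms, the following small Python sketch parses and formats the simple and domain-extended notations; the regular expression and function names are assumptions for this example, not part of any Xen tool.

    import re

    # Illustrative only: parse "bus:device.function" or "domain:bus:device.function".
    BDF_RE = re.compile(
        r"^(?:(?P<domain>[0-9a-fA-F]{1,4}):)?"
        r"(?P<bus>[0-9a-fA-F]{1,2}):"
        r"(?P<device>[0-9a-fA-F]{1,2})\."
        r"(?P<function>[0-9a-fA-F])$"
    )

    def parse_bdf(text):
        m = BDF_RE.match(text)
        if not m:
            raise ValueError("not a BDF: %r" % text)
        domain = int(m.group("domain") or "0", 16)
        return (domain, int(m.group("bus"), 16),
                int(m.group("device"), 16), int(m.group("function"), 16))

    def format_bdf(domain, bus, device, function):
        return "%04x:%02x:%02x.%x" % (domain, bus, device, function)

    print(parse_bdf("00:02.0"))        # (0, 0, 2, 0)
    print(format_bdf(0, 0, 2, 0))      # 0000:00:02.0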

BDF Notation Extension for Virtual-Slots and Options

Xen further extends BDF notation to allow a virtual-slot and options to be supplied. This notation is used when specifying boot-time attachment in the configuration file of a domain. A side-effect of the work on multi-function PCI pass-through is that this notation is now also accepted for PCI hot-plug.

This notation is formed by optionally appending the following to extended BDF:

  • An at-sign (@)
  • A virtual slot number in hexadecimal, which may be padded with leading zeros to two digits

And/or any number of:

  • A comma (,)
  • An option name. Currently the only valid option names are msitranslate and power_mgmt.
  • An equals-sign (=)
  • A value for the option. Currently the only valid values are 0, 1, yes, and no.

In the case where both a virtual-slot and options are specified, the virtual-slot must come first. For example, 0000:00:02.0@1c,msitranslate=1 denotes that physical function 0000:00:02.0 should be passed through into virtual-slot 1c and that the resulting pass-through device will have the msitranslate option enabled.
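The virtual-slot and option syntax can be split mechanically. The following sketch (illustrative Python, not the actual xl/libxl parser) pulls the BDF, the virtual slot, and the options apart according to the rules above.

    # Illustrative only: split "<BDF>[@<vslot>][,<option>=<value>...]".
    VALID_OPTIONS = {"msitranslate", "power_mgmt"}

    def parse_passthrough(spec):
        head, *option_parts = spec.split(",")
        bdf, _, vslot = head.partition("@")
        options = {}
        for part in option_parts:
            name, _, value = part.partition("=")
            if name not in VALID_OPTIONS or value not in {"0", "1", "yes", "no"}:
                raise ValueError("unsupported option: %r" % part)
            options[name] = value in {"1", "yes"}
        return bdf, (int(vslot, 16) if vslot else None), options

    print(parse_passthrough("0000:00:02.0@1c,msitranslate=1"))
    # ('0000:00:02.0', 28, {'msitranslate': True})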

BDF Notation Extension for Multi-Function PCI Pass-Through

BDF format is further extended so that the function field accepts a comma-delimited list whose elements are any of:

  • A function unit
  • A range of function units: the first function unit, a hyphen (-), and the last function unit
  • An asterisk (*), used to denote all physical functions that are present in the device

Where a function unit consists of:

  • A function number, and optionally;
  • An equals sign (=) and;
  • The virtual function number to use

None of the characters used in this expanded syntax for a function are valid in option names, so parsing can be done unambiguously.

This notation is internally expanded into groups of functions. For example:

Multi-Function Notation        Physical
0000:00:1d.0-2                 0000:00:1d.0
                               0000:00:1d.1
                               0000:00:1d.2
0000:00:1d.0,3,5,7             0000:00:1d.0
                               0000:00:1d.3
                               0000:00:1d.5
                               0000:00:1d.7
0000:00:1d.*                   0000:00:1d.0
                               0000:00:1d.1
                               0000:00:1d.2
                               0000:00:1d.3
                               0000:00:1d.5
                               0000:00:1d.7

The examples above assume that virtual slot 7 will be used. The last example is for a physical device that has functions 0, 1, 2, 3, 5 and 7.

As shown above, the physical functions are identity-mapped to virtual functions. The exceptions are:

  • If physical function zero isn't present then the numerically lowest physical function is mapped to virtual function zero. This is because PCI devices must always include function zero.
  • Explicit assignment of virtual functions.
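The expansion rules above (ranges, comma-separated lists, the asterisk, identity mapping, and the function-zero exception) can be modelled as in the sketch below. This is illustrative Python, not the actual Xen implementation; present_functions stands in for probing the physical device, and the explicit "=" form is sketched separately further below.

    def expand_functions(func_field, present_functions):
        # Illustrative only: expand "0-2", "0,3,5,7", or "*" into
        # (virtual, physical) function pairs.
        if func_field == "*":
            physical = sorted(present_functions)
        else:
            physical = []
            for unit in func_field.split(","):
                if "-" in unit:
                    first, last = (int(x, 16) for x in unit.split("-"))
                    physical.extend(range(first, last + 1))
                else:
                    physical.append(int(unit, 16))
        # Identity-map physical to virtual functions, except that when physical
        # function 0 is absent, the numerically lowest physical function becomes
        # virtual function 0 (a device must always include function zero).
        if physical and 0 not in physical:
            lowest = min(physical)
            return [(0 if f == lowest else f, f) for f in physical]
        return [(f, f) for f in physical]

    print(expand_functions("0-2", {0, 1, 2, 3, 5, 7}))  # [(0, 0), (1, 1), (2, 2)]
    print(expand_functions("*", {0, 1, 2, 3, 5, 7}))    # identity map of all present functions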

Examples that explicitly assign virtual functions:

Multi-Function Notation        Physical
0000:00:1d.2=0-0=2             0000:00:1d.2
                               0000:00:1d.1
                               0000:00:1d.0
0000:00:1d.0=3,3=2,5=1,7=0     0000:00:1d.7
                               0000:00:1d.5
                               0000:00:1d.3
                               0000:00:1d.0

Again, the examples above assume that virtual slot 7 will be used. The asterisk (*) notation can't be used in conjunction with explicitly setting the virtual function number.

A virtual device whose explicitly assigned virtual functions do not include virtual function zero is invalid.
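For the explicit-assignment form, a similarly illustrative sketch (again, not the real parser) expands comma-separated "<physical>=<virtual>" units and rejects mappings that leave virtual function zero unassigned. Ranges that carry assignments on both endpoints, such as 2=0-0=2 above, are left out of this sketch for brevity.

    def expand_explicit(func_field):
        # Illustrative only: expand units of the form "<phys>[=<virt>]",
        # e.g. "0=3,3=2,5=1,7=0", into (virtual, physical) pairs.
        pairs = []
        for unit in func_field.split(","):
            phys, _, virt = unit.partition("=")
            phys = int(phys, 16)
            pairs.append((int(virt, 16) if virt else phys, phys))
        if all(virt != 0 for virt, _ in pairs):
            raise ValueError("virtual device must include virtual function 0")
        return pairs

    print(expand_explicit("0=3,3=2,5=1,7=0"))  # [(3, 0), (2, 3), (1, 5), (0, 7)]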

A limitation of this scheme is that it does not allow functions from different physical devices to be combined to form a multi-function virtual device. However, there is concern that such combinations would be invalid due to hardware limitations; this needs further investigation.




The M.2 form factor is intended for Mobile Adapters. The M.2 is a natural transition from the Mini Card and Half-Mini Card (refer to the PCI Express Mini CEM Specification) to a smaller form factor in both size and volume. The M.2 is a family of form factors that enables expansion, contraction, and higher integration of functions onto a single form factor module solution. One of the goals for M.2 is to be significantly smaller in the XYZ and overall volume than the HalfMini Card for the very thin computing Platforms (e.g., Notebook, Tablet/Slate Platforms) that require a much smaller solution. show less 5.x Specification
This document provides test descriptions for PCI Express electrical testing. It is relevant for anyone building Add-in Cards or system boards to the PCI Express Card Electromechanical Specification 5.0. This specification does not describe the full set of PCI Express tests and assertions for these devices. show less 5.x Specification
This ECR adds a three-bit field called ‘OHC-E Support’ in Device Capabilities 3 Register to indicate OHC-E support details in a function. show less 6.x ECN
The primary objectives of this External Cable Specification for PCI Express 5.0 and 6.0 document are to provide • 32.0 GT/s and 64.0 GT/s electrical specifications for mated cable assembly and mated cable connector based on SFF-TA-1032 Specification, • specifications of sideband functions for sideband pins allocated in the SFF-TA-1032 Specification, and • guidelines for 32.0 GT/s and 64.0 GT/s electrical spec compliance testing. show less 6.x Specification
The concept of “Prefetchable” MMIO was originally needed to control PCI-PCI Bridges, which were allowed/encouraged to prefetch Memory Read data in prefetchable regions. MMIO that has read side-effects, e.g. locations that auto-increment when read, cannot safely be prefetched from, and so needed to be distinguished from regions that could. Additional uses/meanings evolved outside of the PCI-PCI Bridge context, but were not clearly defined in relation to behaviors outside of the PCI/PCIe fabric itself, leading to much industry confusion. Additionally, the PCI-PCI Bridge specification defined the Type1 (bridge) Function Base/Limit mechanism such that MMIO resources above 4GB can only be “Prefetchable,” and since the <4GB space is very limited by modern standards, it has been strongly encouraged for devices to always declare MMIO resource requests as “Prefetchable.” show less 6.x ECN
The M.2 form factor is intended for Mobile Adapters. The M.2 is a natural transition from the Mini Card and Half-Mini Card (refer to the PCI Express Mini CEM Specification) to a smaller form factor in both size and volume. The M.2 is a family of form factors that enables expansion, contraction, and higher integration of functions onto a single form factor module solution. show less 4.x Specification
The primary objectives of this Internal Cable Specification for PCI Express 5.0 and 6.0 document are to provide 32 GT/s and 64 GT/s electrical specifications for mated cable assembly and mated cable connector based on SFF-TA-1016 Specification, specifications of sideband functions for sideband pins allocated in the SFF-TA-1016 Specification, and guidelines for 32 GT/s and 64 GT/s electrical spec compliance testing. show less 6.x Specification
This test specification is intended to confirm if a stand-alone Retimer is compliant to the PCIe Base Specification. This test specification is not intended to test Retimers based only on the Extension Devices ECN to the PCI Express Base Specification, Revision 3.x. This test specification only covers stand-alone Retimers in common clock mode, and is not intended to test Retimers integrated onto a platform or an add-in card. show less 5.x Specification
The focus of this specification is on PCI Express® (PCIe®) solutions utilizing the SFF-8639 (also known as U.2) connector interface. Form factors include, but are not limited to, those described in the SFF-8201 Form Factor Drive Dimensions Specification. Other form factors, such as PCI Express Card Electromechanical are documented in other independent specifications. show less 5.x Specification
This ECR introduces updated dimensioning and tolerances for the 48VHPWR header and plug. The 48VHPWR connector has been removed from the specification. The name for the new connector is 48V1x2. Ordering of the sense pins is changed so that Sense0 and Sense1 pins are located farthest from each other. This reordering is incompatible with the pre-existing connector definition. Per workgroup input, there are no known implementations of the 48VHPWR header and plug as of this publication date. References to external specifications are added. show less 5.x ECN
This document defines the “base” specification for the PCI Express architecture, including the electrical, protocol, platform architecture and programming interface elements required to design and build devices and systems. A key goal of the PCI Express architecture is to enable devices from different vendors to inter-operate in an open architecture, spanning multiple market segments including clients, servers, embedded, and communication devices. The architecture provides a flexible framework for product versatility and market differentiation. show less 6.x Specification
This document defines the “base” specification for the PCI Express architecture, including the electrical, protocol, platform architecture and programming interface elements required to design and build devices and systems. A key goal of the PCI Express architecture is to enable devices from different vendors to inter-operate in an open architecture, spanning multiple market segments including clients, servers, embedded, and communication devices. The architecture provides a flexible framework for product versatility and market differentiation.  show less 6.x Specification
Form factor specifications will become easier to develop because many sideband elements can be incorporated by reference. Commonality will increase, simplifying system design and implementation. Other sections of the Base specification will be enabled to cleanly make use of sideband mechanisms, where appropriate. show less 6.x ECN
PCI Express CEM Specification, Revision 5.1 Errata show less 5.x Errata
This specification contains the Class Code and Capability ID descriptions originally contained the PCI Local Bus Specification, bringing them into a standalone document that is easier to reference and maintain. This specification also consolidates Extended Capability ID assignments from the PCI Express Base Specification and various other PCI specifications. It is intended that this document be used along with the PCI Express Base Specification Revision 5.0. show less 1.x Specification
This ECR defines an optional mechanism for passing management messages between system software and a PCI Function using a mailbox interface in the Function’s memory mapped I/O (MMIO) space. It also includes mechanisms to advertise, locate, and extend MMIO I/O register blocks generically, the definition of the mailbox registers and command interface, and the management message passthrough (MMPT) interface. show less 6.x ECN
CMA-SPDM defines a mapping of the DMTF SPDM (DSP0274) specification for PCIe implementations. Since the initial publication of CMA-SPDM, multiple revisions of DSP0274 have been published. This set of revisions is intended to align with these updates, with DOE 1.1, and with current industry direction. Highlights include: • Alignment of CMA-SPDM with SPDM 1.2 and DOE 1.1 • Removal of outdated/redundant material, especially material that is now better covered in SPDM itself • Improved clarity of SPDM mapping to data objects • Correcting under-specified areas, especially regarding interoperable algorithm choices • Remove requirements placed on leaf certificates show less 6.x ECN
This proposal introduces a new version of the M.2 connector with improved amperage per pin to 1 A, card outline changes with increased component area options. show less 4.x ECN
This ECN defines Connector Type encodings for the new 12V-2x6 connector. This connector, defined in CEM 5.1, replaces the 12VHPWR connector. Update 12VHPWR encoding to reflect published CEM 5.0 (the Base Spec and CEM 5.0 were inconsistent). Update default measurement methodology for Maximum and Sustained power to match existing Form Factor specifications, providing consistency between specifications. show less 6.x ECN
This Card Electromechanical (CEM) specification is a companion for the PCI Express® Base Specification, Revision 5.0. Its primary focus is the implementation of an evolutionary strategy with earlier PCI™ desktop/server mechanical and electrical specifications. show less 5.x Specification
For the SFF-8639 module (U.2) specification, this ECR increases +3.3 VAux current. The category of SMBus inactive current is eliminated and replaced with a default current of 8 mA. Additionally, the 5 mA active current requirement is eliminated and replaced with a 25 mA requirement if MCTP or I3C Basic traffic is initiated by the platform when Vaux power is enabled while 12V is disabled. show less 4.x ECN
This Card Electromechanical (CEM) specification is a companion for the PCI Express® Base Specification, Revision 5.0. Its primary focus is the implementation of an evolutionary strategy with earlier PCI™ desktop/server mechanical and electrical specifications. show less 5.x Specification
This document defines the “base” specification for the PCI Express architecture, including the electrical, protocol, platform architecture and programming interface elements required to design and build devices and systems. A key goal of the PCI Express architecture is to enable devices from different vendors to inter-operate in an open architecture, spanning multiple market segments including clients, servers, embedded, and communication devices. The architecture provides a flexible framework for product versatility and market differentiation. show less 6.x Specification
This document defines the “base” specification for the PCI Express architecture, including the electrical, protocol, platform architecture and programming interface elements required to design and build devices and systems. A key goal of the PCI Express architecture is to enable devices from different vendors to inter-operate in an open architecture, spanning multiple market segments including clients, servers, embedded, and communication devices. The architecture provides a flexible framework for product versatility and market differentiation.  show less 6.x Specification
Informational testing on test case 54-20 will give DUT vendor information about their implementation of the capability. Not passing judgement on test case 54-20 will prevent DUTs from incorrect failing the DUT or false passing the DUT at Compliance Workshops. show less 5.x ECN
The M.2 form factor is intended for Mobile Adapters. The M.2 is a natural transition from the Mini Card and Half-Mini Card (refer to the PCI Express Mini CEM Specification) to a smaller form factor in both size and volume. The M.2 is a family of form factors that enables expansion, contraction, and higher integration of functions onto a single form factor module solution. show less 5.x Specification
Defines a new wire semantic and related capabilities for addressing the limitations of the PCI/PCIe fabric-enforced ordering rules. Specifically:  Fabrics with multiple paths between a source and destination cannot be supported; posted Writes don’t match the semantics of other fabrics, in that the Requester doesn’t (directly) know if/when a write has actually completed; and writes flowing towards destinations with differing write performance can cause global stalls show less 1.x ECN
This proposal introduces a new version of the M.2 connector with improved amperage per pin to 1A, and card outline changes with increased component area options. show less 1.x ECN
This ECR removes the transmitter jitter test requirement for 32 GT/s systems. Transmitter jitter test remains a requirement for 32 GT/s Add-in Cards. show less 5.x ECN
4.x Errata
This specification contains the Class Code and Capability ID descriptions originally contained the PCI Local Bus Specification, bringing them into a standalone document that is easier to reference and maintain. This specification also consolidates Extended Capability ID assignments from the PCI Express Base Specification and various other PCI specifications. It is intended that this document be used along with the PCI Express Base Specification Revision 5.0. show less 1.x Specification
This ECN replaces existing drawings for the 12VHPWR cable plug with 2 different options. show less 5.x ECN
CXL 3.0 defines an alternate protocol and has asked for a DLLP assignment. They want to use a DLLP instead of a Flit_Marker to reduce their latency (this issue is unique to their protocol – PCIe does not have this issue).  show less 6.x ECN
Expands power excursion to 12V power rail in PCIE CEM connector in addition to 12VHPWR and 48VHPWR connectors but excludes legacy 2x3 and 2x4 auxiliary power connectors from power excursion specification. show less 5.x ECN
The M.2 form factor is intended for Mobile Adapters. The M.2 is a natural transition from the Mini Card and Half-Mini Card (refer to the PCI Express Mini CEM Specification) to a smaller form factor in both size and volume. The M.2 is a family of form factors that enables expansion, contraction, and higher integration of functions onto a single form factor module solution. show less 4.x Specification
The M.2 form factor is intended for Mobile Adapters. The M.2 is a natural transition from the Mini Card and Half-Mini Card (refer to the PCI Express Mini CEM Specification) to a smaller form factor in both size and volume. The M.2 is a family of form factors that enables expansion, contraction, and higher integration of functions onto a single form factor module solution. show less 4.x Specification
This ECR increases the specified leakage and capacitance tolerances for Auxiliary I/O signals. show less 4.x ECN
This ECR defines a revised and extended Data Object Exchange mechanism. This ECN builds 5 upon the content defined in the ECN that defined the original revision of Data Object Exchange, published 26 March 2020 (document date of 12 March 2020). show less 1.x ECN
This ECN adds 1.8V IO support to Type 1216, Type 2226, and Type 3026 LGAs. This support adds two previously defined pins to these LGAs: • VIO_CFG, a 1.8V IO support indication (one pin) • VIO 1.8V, a 1.8V IO Voltage source (one pin) The VIO 1.8 V signal is intended as an IO supply and reference voltage for host interface sideband signals PERST#, CLKREQ#, and PEWAKE# and additional signals such as SUSCLK, W_DISABLE1#, W_DISABLE2#. This provides IO voltage flexibility to enable IO voltage levels other than 3.3V in the applicable M.2 form factors. The VIO_CFG signal is intended to provide the Platform an indication of the IO voltage capabilities of the M.2 Adapter installed. In cases where the Platform detects that an incompatible Adapter is installed, the Platform may choose to not power the Adapter or isolate the affected sideband signals to avoid damage or interface instability. show less 4.x ECN
This specification describes the PCI Express® architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification. This document defines the “base” specification for the PCI Express architecture, including the electrical, protocol, platform architecture and programming interface elements required to design and build devices and systems. show less 1.x Specification
This is a change bar version of the PCI Express Base 6.0 Specification comparing Base 6.0/1.0 to Base 6.0.1/1.0 show less 1.x Specification
This document defines the TEE Device Interface Security Protocol (TDISP) - An architecture for trusted I/O virtualization providing the following functions: 1. Establishing a trust relationship between a TVM and a device. 2. Securing the interconnect between the host and device. 3. Attach and detach a TDI to a TVM in a trusted manner. show less 5.x ECN
Expansion of example methods for lane margining testing to create different electrical link conditions for the two test runs. These changes apply to both Add-in Card and System testing. Adding a repeatability test option as proof that lane margining measurement is implemented.   show less 1.x ECN
This test specification is intended to confirm if a stand-alone Retimer is compliant to the PCI Base Specification. show less 4.x Specification
Summary of the Functional Changes I. Add core voltages 0.75 V in PWR_3 rail for BGA SSD. II. Add new pin configuration including 0.75 V pin. show less 4.x ECN
Summary of the Functional Changes Changing the RFU pin in the 12VHPWR connector sideband to Sense1. Adding 150W and 300W power capabilities to the encoding options for Sense0 and Sense1. Makes Card_CBL_PRES required to be tied to ground with 4.7 kΩ resistor. show less 5.x ECN
Transmitter jitter requirements at 32 GT/s are being added for system board and Add-in Card. Affected Document: PCI Express Card Electromechanical Specification Revision 5.0, Version 1.0 show less 5.x ECN
This document provides test descriptions for PCI Express electrical testing. It is relevant for anyone building Add-in Cards or system boards to the PCI Express Card Electromechanical Specification 5.0. show less 5.x Specification
This document primarily covers PCI Express testing of all defined Device Types and RCRBs for the standard Configuration Space mechanisms, registers, and features. show less 5.x Specification
This test specification primarily covers testing of PCI Express® Device and Port types for compliance with the link layer and transaction layer requirements of the PCI Express Base Specification. show less 5.x Specification
4.x ECN
This document provides test descriptions for PCI Express electrical testing. It is relevant for anyone building Add-in Cards or system boards to the PCI Express Card Electromechanical Specification 4.0.   show less 4.x Specification
There are two informative "changebar" versions of the PCI Express Base 6.0 Specification comparing Base 6.0/1.0 to Base 6.0/0.9 and comparing Base 5.0 to Base 6.0. show less 6.x Specification
This specification describes the PCI Express® architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification. This document defines the “base” specification for the PCI Express architecture, including the electrical, protocol, platform architecture and programming interface elements required to design and build devices and systems. show less 6.x Specification
This is a change bar version of the Integrity and Data Encryption (IDE) ECN – Revision A. show less 5.x ECN
Original IDE ECN plus IDE items included in final Base 5.0 Errata. Changebar version relative to original IDE ECN also available. show less 5.x ECN
...view more Corresponds to Errata included in Base 6.0, Version 0.9 and Version 1.0 show less 5.x Errata
This ECR establishes two operational modes for use of the Power Disable (PWRDIS) signal. The existing mode allowed use of the signal for coordinated shutdown of the PCIe device, but was optimized for a power-on reset of a non-responsive device. The new mode reduces PWRDIS minimum asserted hold time from 5 s to 100 ms for use in a coordinated shutdown with an emphasis on entry and exit times from D3cold. show less 4.x ECN
This change allows for cards to exceed maximum power levels currently defined in the CEM spec to enable higher performance for certain workloads. This change clearly defines limits for these excursions to allow system designers to properly design power subsystems to enable these excursions. show less 5.x ECN
The long-standing requirement for a component’s LTSSM to enter Detect state within 20 ms of the end of Fundamental Reset is relaxed (extended) to 100 ms for components that support >5 GT/s Link speeds. show less 5.x ECN
4.x ECN
This specification contains the Class Code and Capability ID descriptions originally contained the PCI Local Bus Specification, bringing them into a standalone document that is easier to reference and maintain. This specification also consolidates Extended Capability ID assignments from the PCI Express Base Specification and various other PCI specifications. It is intended that this document be used along with the PCI Express Base Specification Revision 5.0. show less 1.x Specification
Add 3052 and 3060 form factors for WWAN modules using Socket 2 with Key B and Key C. show less 4.x ECN
5.x Errata
Describes a method to measure Tx jitter parameters at 32 GT/s accurately by using a Jitter Measurement Pattern. This replaces the S-parameter de-embedding method. In this method, a CTLE-based equalization instead of S-parameter based de-embedding gain filter is applied to the captured Tx waveform to mitigate signal degradation due to frequency-dependent channel loss. The CTLE-based equalization is defined by the 32 GT/s reference CTLE curves. The proposed method with the use of clock pattern in the lane under test and compliance pattern in other lanes avoids the inaccuracy of the S-parameter based de-embedding that results from the amplification of the real-time oscilloscope floor noise by the de-embedding gain filter. The Tx jitter measurement methods for 8.0 and 16.0 GT/s have been kept unchanged. show less 5.x ECN
Introduces a fitting-based Tx preset measurement methodology that extracts Tx equalization coefficients from measured step responses with and without Tx equalization. Consequently, it overcomes a few limitations of the current DC voltage-level based methodology where the ratio of DC voltage levels of various presets is used to avoid measurement complexity due to high frequency-dependent loss. Since the use of the ratio of DC voltage levels do not guarantee the correct use of Tx equalization coefficients and constant voltage swing across presets, the existing DC voltage-level based measurement methodology may give incorrect results if the Tx equalization coefficients and voltage swing significantly deviate from the intended values for the specified Tx presets. show less 5.x ECN
This is a change to the PCI Express Base Specification, Revision 5.0. show less 5.x ECN
This is a change to the PCI Express Base Specification, Revision 5.0. show less 5.x ECN
This Card Electromechanical (CEM) specification is a companion for the PCI Express® Base Specification, Revision 5.0. Its primary focus is the implementation of an evolutionary strategy with earlier PCI™ desktop/server mechanical and electrical specifications. show less 5.x Specification
This Card Electromechanical (CEM) specification is a companion for the PCI Express® Base Specification, Revision 5.0. Its primary focus is the implementation of an evolutionary strategy with earlier PCI™ desktop/server mechanical and electrical specifications. show less 5.x Specification
The focus of this specification is on PCI Express® (PCIe®) solutions utilizing the SFF-8639 connector interface. Form factors include, but are not limited to, those described in the SFF-8201 Form Factor Drive Dimensions Specification. Other form factors, such as PCI Express Card Electromechanical are documented in other independent specifications. show less 4.x Specification
This document describes the hardware independent firmware interface for managing PCI, PCI-X, and PCI Express™ systems in a host computer. show less 3.x Specification
This document describes the hardware independent firmware interface for managing PCI, PCI-X, and PCI Express™ systems in a host computer. show less 3.x Specification
Proposes repurposing five RFU pins in the Type 1113 (11.5mm x 13mm) BGA ball map to be optionally used for indicating PWR1, PWR2, and PWR3 supply voltage requirements to the platform. If this ECR is implemented then all 5 pins need to be implemented. Adds a 1.0 V power supply option for PWR3 for both BGA1113 and BGA1620. show less 4.x ECN
5.x Errata
Integrity & Data Encryption (IDE) provides confidentiality, integrity, and replay protection for TLPs. It flexibly supports a variety of use models, while providing broad interoperability. The cryptographic mechanisms are aligned to current industry best practices and can be extended as security requirements evolve. The security model considers threats from physical attacks on Links, including cases where an adversary uses lab equipment, purpose-built interposers, malicious Extension Devices, etc. to examine data intended to be confidential, modify TLP contents, & reorder and/or delete TLPs. TLP traffic can be secured as it transits Switches, extending the security model to address threats from reprogramming Switch routing mechanisms or using “malicious” Switches. Compared to the Member Review copy, and consistent with the “NOTICE TO REVIEWERS” in that copy, this final revision contains significant revisions to the key management protocol in order to align it closely with the DMTF’s Secured Messages using SPDM Specification, which was not available at the time the Member Review copy was prepared. Additionally, the final copy includes significant improvements in protection against Adversary-in-the-Middle attacks, and, consistent with member feedback received in response to the query regarding key size for AES-GCM applied to IDE TLPs, supports only the 256b key size. show less 5.x ECN
The M.2 form factor is intended for Mobile Adapters. The M.2 is a natural transition from the Mini Card and Half-Mini Card (refer to the PCI Express Mini CEM Specification) to a smaller form factor in both size and volume. The M.2 is a family of form factors that enables expansion, contraction, and higher integration of functions onto a single form factor module solution. show less 4.x Specification
The M.2 form factor is intended for Mobile Adapters. The M.2 is a natural transition from the Mini Card and Half-Mini Card (refer to the PCI Express Mini CEM Specification) to a smaller form factor in both size and volume. The M.2 is a family of form factors that enables expansion, contraction, and higher integration of functions onto a single form factor module solution. show less 4.x Specification
This document provides test descriptions for PCI Express electrical testing. It is relevant for anyone building Add-in Cards or system boards to the PCI Express Card Electromechanical Specification 4.0. This specification does not describe the full set of PCI Express tests and assertions for these devices show less 4.x Specification
This document provides test descriptions for PCI Express electrical testing. It is relevant for anyone building Add-in Cards or system boards to the PCI Express Card Electromechanical Specification 4.0. This specification does not describe the full set of PCI Express tests and assertions for these devices show less 4.x Specification
Loosens restrictions on use of PASID to allow PASID to be applied to Memory Requests using Translated addresses (AT=Translated). show less 5.x ECN
1.x ECN
Revision B (July 22, 2020) corrects an errata in the original revision (November 28, 2018). PWRDIS timings were incorrectly specified as a maximum when they are meant to be specified as a minimum value. The affected portion is highlighted in Table 3-26 PWRDIS AC characteristics. show less 1.x ECN
2.x ECN
This document is a companion specification to the PCI Express Base Specification and other PCI Express® documents listed in Section 1.1. The primary focus of the PCI Express OCuLink Specification is the implementation of internal and external small form factor PCI Express connectors and cables. This form factor supports multiple market segments, from client, mobile, server, datacenter, and storage. This specification discusses cabling and connector requirements to meet the 8.0 GT/s signaling needs in the PCI Express Base Specification. show less 1.x Specification
The changes are to Socket-1, Keys E and A-E Addition of 1.8V IO support type (1 pin) Addition of 1.8V IO Voltage source (1 pin) Addition of WI-FI_DISABLE and BT_DISABLE signals overlaid onto W_DISABLE1#, W_DISABLE2# Additional antenna assignment which allows for multiple Bluetooth antennas. show less 3.x ECN
Add core voltages 0.8 V in PWR_3. II. Add new pin configuration including 0.8 V. show less 3.x ECN
Smaller lithography has led to smaller pad sizes which has increased parasitics on inputs. For the Card Electromechanical Specification, this ECR increases Cin, the maximum input pin capacitance on 3.3 V logic signals (applies to PERST# and PWRBRK#) from 7 pF to 20 pF (see CEM Table 3). For the M.2 specification, this ECR increases M.2 CIN, the maximum input pin capacitance for both 3.3 V logic signal (applies to PERST#, see M.2 Table 4-1) and 1.8 V logic signals (applies to PERST# and PEWAKE# (when used for OBFF signaling), see M.2 Table 4-2) from 10 pF to 20 pF. This capacitance increase is large enough for known upcoming lithographies. It had not been clear what the measurement point was for CIN in CEM or M.2 specifications. This 18 ECN extends the COUT measurement point specified in M.2 to apply to CIN and COUT for both 19 CEM and M.2 specifications. show less 4.x ECN
This proposal introduces an additional width, component heights, and the ability to specify the top surface as a planar. show less 3.x ECN
This is a companion specification to the PCI Express Base Specification. The primary focus of this specification is the implementation of cabled PCI Express®. No assumptions are made regarding the implementation of PCI Express-compliant Subsystems on either side of the cabled Link (PCI Express Card Electromechanical (CEM), ExpressCard™, ExpressModule™, PXI Express™, or any other form factor). Such form factors are covered in separate specifications show less 4.x Specification
This is a companion specification to the PCI Express Base Specification. The primary focus of this specification is the implementation of cabled PCI Express®. No assumptions are made regarding the implementation of PCI Express-compliant Subsystems on either side of the cabled Link (PCI Express Card Electromechanical (CEM), ExpressCard™, ExpressModule™, PXI Express™, or any other form factor). Such form factors are covered in separate specifications show less 3.x Specification
This document provides test descriptions for PCI Express electrical testing. It is relevant for anyone building Add-in Cards or system boards to the PCI Express Card Electromechanical Specification 4.0. This specification does not describe the full set of PCI Express tests and assertions for these devices. show less 4.x Specification
This document provides test descriptions for PCI Express electrical testing. It is relevant for anyone building Add-in Cards or system boards to the PCI Express Card Electromechanical Specification 4.0. This specification does not describe the full set of PCI Express tests and assertions for these devices. show less 4.x Specification
This ECR defines an adaptation of the data objects and underlying protocol defined in the DMTF SPDM specification ( ) for PCIe components, providing a mechanism to verify the component configuration and firmware/executables (Measurement) and hardware identities (Authentication). “Firmware” in this context includes configuration settings in addition to executable code. This protocol can be operated via the Data Object Exchange (DOE) mechanism, or through other means, for example via MCTP messaging conveyed using PCIe Messages, or via SMBus, I2C, or other management I/O. Data Object Exchange (DOE) is defined in the Data Object Exchange ECN to the PCIe Base Specification Rev 4.0, 5.0, approved on 12 Mar 2020. show less 5.x ECN
This ECN defines a new Request, the Deferrable Memory Write (DMWr), that requires the Completer to return an acknowledgement to the Requester, and provides a mechanism for the recipient to defer (temporarily refuse to service) the Request. To provide space for the required control registers, this ECN also defines an Extended Capability, the Device 3 Extended Capability, to provide Device Capabilities 3, Device Control 3, and Device Status 3 registers. This Extended Capability is required for DMWr support, but can also be applied to other uses besides DMWr, and therefore can be implemented by Functions that do not support DMWr. show less 5.x ECN
This ECR defines an optional Extended Capability structure and associated control mechanisms to provide System Firmware/Software the ability to perform data object exchanges with a Function or RCRB. To support Component Measurement and Authentication (CMA) accessible in-band via host firmware/software, Data Object Exchange (DOE) is required, and CMA motivates the need for DOE, although broader uses are anticipated. show less 5.x ECN
Revision A (March 4, 2020) corrects an oversight in the original revision (November 28, 2018). The Socket 2 Key B PCIe/USB3.1 Gen1-based WWAN Adapter Pinout was not updated to reflect the addition of 1.8V sideband support like the other tables. The affected portion is highlighted in Table 33 Socket 2 Key B PCIe/USB3.1 Gen1-based WWAN Adapter Pinout. show less 3.x ECN
This specification contains the Class Code and Capability ID descriptions originally contained in the PCI Local Bus Specification, bringing them into a standalone document that is easier to reference and maintain. This specification also consolidates Extended Capability ID assignments from the PCI Express Base Specification and various other PCI specifications. 1.x Specification
Revision A (January 30, 2020) corrects an error in the original revision (September 29, 2019). The ATS Memory Attributes Supported field in the ATS Capabilities Register is now assigned bit location 8. It was previously assigned bit location 7. The affected text is highlighted in Table 10-9: ATS Capability Register. show less 5.x ECN
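For illustration only, a minimal sketch (C) of a check against the Revision A bit assignment. The macro and function names are invented here, and reading the 16-bit ATS Capability Register is left to whatever configuration-space accessors the platform provides.

    #include <stdbool.h>
    #include <stdint.h>

    /* Revision A assigns ATS Memory Attributes Supported to bit 8 of the ATS
     * Capability Register (the original revision had used bit 7). The macro
     * name is illustrative, not taken from the specification. */
    #define ATS_CAP_MEM_ATTR_SUPPORTED  (1u << 8)

    static bool ats_memory_attributes_supported(uint16_t ats_cap_reg)
    {
        return (ats_cap_reg & ATS_CAP_MEM_ATTR_SUPPORTED) != 0;
    }

    int main(void)
    {
        uint16_t ats_cap_reg = 0x0100;  /* example register value with bit 8 set */
        return ats_memory_attributes_supported(ats_cap_reg) ? 0 : 1;
    }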
4.x ECN
ATS Memory Attributes ECN, Revision A 4.x ECN
4.x ECN
5.x ECN
Shadow Functions are permitted to be assigned only where currently unused Functions reside. The Function declaring the shadowing is permitted to overflow its Transaction ID space over into the Shadow Function. The impetus for defining Shadow Functions is to provide more Transaction ID space without increasing the Tag field, since there are no straightforward means to do that at the current time. show less 5.x ECN
Due to ambiguity in earlier versions of the PCIe Base Specification two different interpretations of the byte positions in the PTM ResponseD Message Propagation Delay field have been implemented. This ECR defines mechanisms that new hardware can implement to support the adaptation to either of the interpretations. show less 5.x ECN
4.x ECN
5.x ECN
This is a companion specification to the PCI Express Base Specification. The primary focus of this specification is the implementation of cabled PCI Express®. No assumptions are made regarding the implementation of PCI Express-compliant Subsystems on either side of the cabled Link (PCI Express Card Electromechanical (CEM), ExpressCard™, ExpressModule™, PXI Express™, or any other form factor). Such form factors are covered in separate specifications. show less 3.x Specification
5.x Errata
DISCLAIMER: Table A-1, Bytes 0 to 127 (Lower Memory Fields), contained an error in the 1.0 Specification. Byte 0, Identifier, was 0Eh and has been changed to 1Ch.   show less 1.x Specification
Final Release against Base Revision 4.0 show less 4.x Errata
4.x Errata
High Volume Manufacturing (HVM) and other manufacturer test processes benefit from the ability to set Add-in Card (AIC) modes that enable multiplexing of standard connector pins for test specific use. This ECR defines a method to allow the system to enable a Manufacturer Test Mode (MFG) on the AIC through the standard interface connector prior to shipping the AIC. This ECR to the CEM specification is consistent with Manufacturing Mode ECN to the SFF-8639 specification. show less 4.x ECN
This specification is a companion for the PCI Express® Base Specification, Revision 4.0. Its primary focus is the implementation of an evolutionary strategy with the current PCI™ desktop/server mechanical and electrical specifications. show less 4.x Specification
This specification is a companion for the PCI Express® Base Specification, Revision 4.0. Its primary focus is the implementation of an evolutionary strategy with the current PCI™ desktop/server mechanical and electrical specifications. show less 4.x Specification
This document primarily covers PCI Express testing of all defined PCI Express device types and RCRBs for the standard Configuration Space mechanisms, registers, and features in Chapter 6 of the PCI Local Bus Specification (Base 3.x or earlier only) and Chapters 7, 9 (Base 4.x or later only), 10 (Base 4.x or later only) of the PCI Express Base Specification (some additional tested registers are described in other specifications that are referenced in the individual test description). This specification does not describe the full set of PCI Express tests for these devices. show less 4.x Specification
This test specification primarily covers testing of PCI Express® Device and Port types for compliance with the link layer and transaction layer requirements of the PCI Express Base Specification. Device and Port types that do not have a link (e.g., Root Complex Integrated Endpoints, Root Complex Event Collectors) are not tested under this test specification. While the test environment can accommodate the presence of a Retimer, it will not test the Retimer itself. At this point, this test specification does not describe the full set of PCI Express tests for all link layer or transaction layer requirements. 4.x Specification
This document provides test descriptions for PCI Express electrical testing. It is relevant for anyone building Add-in Cards or system boards to the PCI Express Card Electromechanical Specification 4.0. This specification does not describe the full set of PCI Express tests and assertions for these devices. show less 4.x Specification
The M.2 form factor is intended for Mobile Adapters. The M.2 is a natural transition from the Mini Card and Half-Mini Card (refer to the PCI Express Mini CEM Specification) to a smaller form factor in both size and volume. The M.2 is a family of form factors that enables expansion, contraction, and higher integration of functions onto a single form factor module solution. show less 3.x Specification
There are four informative "changebar" versions of the PCI Express Base 5.0 Specification, comparing Base 5.0/1.0 to Base 5.0/0.9 and comparing Base 4.0 to Base 5.0. HTML and PDF versions are provided. Both versions are derived from common source material but have different characteristics, and readers may wish to reference both. These documents are non-normative; the published PCI Express Base Specification, Revision 5.0 is the normative version of this specification. 5.x Specification
3.x Specification
3.x Specification
This specification describes the PCI Express architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification. show less 5.x Specification
This proposal introduces 8GT/s electrical compliance details for M.2 based SSDs.  show less 3.x ECN
This proposal repurposes five RFU pins in the 16 mm x 20 mm BGA ball map to be optionally used for indicating PWR1, PWR2, and PWR3 supply voltage requirements to the Platform.  show less 3.x ECN
4.x ECN
This ECN defines four new services under ACS for Downstream Ports, primarily to address issues when ACS redirect mechanisms are used to ensure that DMA Requests from Functions under the direct control of VMs are always routed correctly to the Translation Agent in the host. Three of the services provide redirect or blocking of Upstream Memory Requests that target areas not covered by other ACS services. The fourth service enables the blocking of Upstream I/O Requests, addressing a concern with VM-controlled Functions maliciously sending I/O Requests. show less 4.x ECN
This is a modification of the cable assembly memory map defined in the OCuLink Memory Map ECN. Some bits contained within the external cable assembly's memory are modified to allow for cable aggregation. show less 3.x ECN
This is a modification of the cable assembly memory map defined in the OCuLink Memory Map ECN. Some bits contained within the external cable assembly's memory are modified to allow for cable aggregation. show less 3.x ECN
High Volume Manufacturing (HVM) benefits from the ability to set manufacturing specific SFF8639 Module modes that enable multiplexing of standard connector pins for manufacturing test specific use. This ECR is to define a method to allow the Host to enable Manufacturing Mode on the SFF-8639 Module through the standard interface connector. The Manufacturing Mode solution proposed is consistent with that already approved in SNIA SFFTA-1001 Revision 1.1 for SFF-8639 use. show less 3.x ECN
Changes are requested to be made to Section 4.5.1, _OSC Interface for PCI Host Bridge Devices, and Section 4.5.2.4, Dependencies Between _OSC Control Bits. The changes will enable the Operating System to advertise to the firmware whether it is capable of supporting the _HPX PCI Express Descriptor Setting Record (Type 3). They also enable the Operating System and the Firmware to negotiate ownership of the PCIe Completion Timeout registers. 3.x ECN
The changes affect the PCI Firmware Specification, Revision 3.2 and will enable the Operating System to advertise its Downstream Port Containment related capabilities to the firmware. They also enable the Operating System and the Firmware to negotiate ownership of the Downstream Port Containment Extended Capability register block and collaboratively manage Downstream Port Containment events. 3.x ECN
4.x ECN
This ECN affects the PCI Express Base Specification, Version 4.0. ePTM is an improvement on the existing Precision Time Measurement capability that provides improved detection and handling of error cases. ePTM quickly identifies and resolves errors that may cause clocks to become desynchronized. 4.x ECN
4.x ECN
This ECN introduces multiple features for M.2 and affects the PCIe M.2 Specification Revision 1.1 and the PCIe BGA SSD 11.5x13 ECN. show less 1.x ECN
This ECN updates several areas related to hot-plug functionality, mostly related to Async hot-plug, which is now growing in importance due to its widespread use with NVMe SSDs. All new functionality is optional. This ECN affects the PCIe 4.0 Base Specification. show less 4.x ECN
Several dimensions included in Chapter 9 of the OCuLink 1.0 Specification are repeated from previous chapters. Repeated dimensions have been removed and additional pointers have been added to direct users where to find more information about various OCuLink implementations. 1.x ECN
The changes affect the PCI Firmware Specification, Revision 3.2 and enable the MCFG table format to allow for multiple memory mapped base address entries, instances of Table 4-3, per single PCI Segment Group. show less 3.x ECN
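As a reminder of how a memory mapped base address from such an entry is used, here is a small sketch (C) of the conventional ECAM offset calculation; how a particular base address entry is matched to a bus range within a Segment Group (now possibly several entries per group) is left to the caller, and the function name is illustrative.

    #include <stdint.h>
    #include <stdio.h>

    /* Conventional ECAM layout: 1 MB per bus, 32 KB per device, 4 KB per
     * function of configuration space, added to the MCFG base address. */
    static uint64_t ecam_offset(uint8_t bus, uint8_t dev, uint8_t fn, uint16_t reg)
    {
        return ((uint64_t)bus << 20) | ((uint64_t)(dev & 0x1f) << 15) |
               ((uint64_t)(fn & 0x7) << 12) | (reg & 0xfff);
    }

    int main(void)
    {
        /* Example: bus 3, device 0, function 0, register 0x10. */
        printf("offset = 0x%llx\n", (unsigned long long)ecam_offset(3, 0, 0, 0x10));
        return 0;
    }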
4.x Errata
4.x Errata
Drawings and dimensions for the x4 form factor have been corrected and clarified. show less 1.x ECN
4.x ECN
This ECN enhances Root Complex Event Collectors (RCECs) to allow them to be associated with Devices located on additional Bus numbers. show less 4.x ECN
4.x ECN
4.x ECN
This ECN allows Address Translation Requests and Completions to support the Relaxed Ordering bit, which is currently defined to be Reserved for these types of TLPs. The proposal preserves interoperability with older Translation Agents. 4.x ECN
4.x ECN
The cable presence (CPRSNT#) signal was incompletely and inaccurately specified in the original OCuLink 1.0 specification. The definition of the logic levels of this signal contradicted the active-low naming convention, and the specified signal direction contained multiple contradictions. 1.x ECN
The OCuLink workgroup has received feedback that the information included in the specification regarding cable/ Port aggregation was unclear, particularly with respect to sideband management. Wording in sections relating to cable/ Port aggregation and sideband management has been reworked to be clearer. show less 1.x ECN
The focus of this specification is on PCI Express (PCIe®) solutions utilizing the SFF-8639 connector interface. Form factors include, but are not limited to, those described in the SFF-8201 Form Factor Drive Dimensions Specification. Other form factors, such as PCI Express CEM are documented in other independent specifications. show less 3.x Specification
The ECN provides clarifications for requirements that affect both systems implementers and cable assembly suppliers. The revisions will save time and confusion for the implementation of the optional external OCuLink cables. 1.x ECN
This change notice redefines the outermost ring of ground pins in the 11.5x13 BGA ball map to be redundant ground pins that are non-critical to function (NCTF). NCTF is a new pin definition indicating that while the pins shall continue to be connected to host and device ground, they are redundant, allowing for mechanical failure but not functional failure. 1.x ECN
2.x ECN
The IL/ fitted IL requirements have been clarified. The language of this portion of the spec has been reworked to eliminate confusion and provide uniformity in the subsections included in the following document. show less 1.x ECN
The IL/ fitted IL requirements have been clarified. The language of this portion of the spec has been reworked to eliminate confusion and provide uniformity in the subsections included in the following document. show less 1.x ECN
This is a modification of the connector/cable performance tables defined in OCuLink 1.0, Section 6.9 and updated by the OCuLink Server Change ECN. The tables are reorganized to make this section of OCuLink more functional to the end user. Some table entry values are changed. show less 1.x ECN
Final Release against Base Revision 3.1a. Subsequent Errata will be against Base Revision 4.0. 3.x Errata
Link Activation allows software to temporarily disable Link power management, enabling the avoidance of the architecturally mandated stall for software-initiated L1 Substate exits. show less 4.x ECN
4.x ECN
4.x ECN
The M.2 Type 1216 Land Grid Array (LGA) Connectivity module is modified to add a second PCIe lane (refer to Figure 99 on page 127). 1.x ECN
This proposal adds an additional voltage value to the PWR_1 rail in the PCIe BGA SSD 11.5x13 ECR. Table 3 of section 3.4 in the document “PCIe BGA SSD 11.5x13 ECR”, defines the PWR_1 signal as a 3.3V source. This is changed to now also include a 2.5V rail. show less 1.x ECN
This ECN adds two capabilities by way of adding functions to the PCI Firmware Spec defined _DSM definition. show less 3.x ECN
1.x ECN
1.x ECN
3.x ECN
a. The connector and cable assembly pinout tables have been revised to show the complete OCuLink pinout assignments in all cases. b. The two left-most columns in the cable pinout tables have been combined for clarity. c. Because the pinout tables span multiple pages, P1/P2 designations have been included in the appropriate column titles of the cable pinout tables to make it easier to follow which end of the cable is being addressed on each page in each table. 1.x ECN
The backplane type (BP Type) signal was incompletely specified in the original OCuLink 1.0 specification. Table 2-2 now specifies the logic type used for this signal, and a definition is provided for the signal's logic levels. 1.x ECN
a. The connector and cable assembly pinout tables have been revised to show the complete OCuLink pinout assignments in all cases. b. The two left-most columns in the cable pinout tables have been combined for clarity. c. Because the pinout tables span multiple pages, P1/P2 designations have been included in the appropriate column titles of the cable pinout tables to make it easier to follow which end of the cable is being addressed on each page in each table. 1.x ECN
The backplane type (BP Type) signal was incompletely specified in the original OCuLink 1.0 specification. Table 2-2 now specifies the logic type used for this signal, and a definition is provided for the signal's logic levels. 1.x ECN
This specification describes the PCI Express® architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification. show less 4.x Specification
This specification describes the PCI Express® architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification. show less 4.x Specification
Defines mechanisms for simple storage enclosure management for NVMe SSDs, consistent with established capabilities in the storage ecosystem, with the first version of this capability defining a register interface for LED control. This ECN defines a new PCI Express extended capability called Native PCIe Enclosure Management (NPEM). show less 3.x ECN
Provide an optional mechanism to indicate to software the results of a hardware validation of Expansion ROM contents. show less 4.x ECN
4.x ECN
This ECN specifies changes to the PCI Local Bus Specification Revision 3.0 and the PCI Express CEM Specification 3.0. Changes to the PCI Local Bus Specification cover a new VPD encoding and a 32-bit field. Changes to the PCI Express CEM Specification cover a series of graphs used to classify air flow impedance and thermal properties under varying conditions, as well as the test figure and process to create these graphs for a given adapter add-in card. Adapter add-in card types supported by this ECN include all SINGLE-SLOT and DUAL-SLOT PCIe CEM adapter add-in cards without integrated air movers, including standard height as well as low-profile adapter add-in cards. Adapter add-in cards with an integrated air mover were not addressed due to the added complication of their integrated air mover in the overall platform’s potential cooling redundancy. 3.x ECN
Defines a new, optional PCI-SIG Defined Type 1 Vendor Defined Message. This message provides software and/or firmware running on a Function additional information to uniquely identify that Function within a large system or a collection of systems. When a single system contains multiple PCI Express Hierarchies, this message tells a Function which Hierarchy it resides in. This value, in conjunction with the Routing ID, uniquely identifies a Function within that system. In a clustered system, this message can include a System Globally Unique Identifier (System GUID) for each system. This value, in conjunction with the Hierarchy ID and Routing ID, uniquely identifies a Function within that cluster. 3.x ECN
M.2 Key B (WWAN) is modified to enable PCIe and USB 3.1 Gen1 signals to be simultaneously present on the connector. This enables support for a single SKU M.2 card that supports both PCIe and USB 3.1 Gen1. There are two implementation options enabled: 1. State #14 in the “Socket 2 Add-in Card Configuration Table” is re-defined to indicate an Add-in Card built to the PCI Express M.2 Specification, Revision 1.1 or later where PCIe and USB 3.1 Gen1 are both present on the connector. The choice of Port Configuration is vendor defined. This enables the host to unambiguously determine that PCIe and USB 3.1 Gen1 are present on the connector. 2. States #4, 5, 6, 7 in the “Socket 2 Add-in Card Configuration Table” are re-defined to indicate that in addition to USB 3.1 Gen1, PCIe may be present on the connector. This definition was used by M.2 cards built to the PCI Express M.2 Specification, Revision 1.0 (USB 3.1 Gen1 on connector; PCIe is “no connect”). This definition is now also permitted to be used by M.2 cards built to the PCI Express M.2 Specification, Revision 1.1 or later to indicate that PCIe and USB 3.1 Gen1 are both present on the connector. This allows GPIO port configurations to remain consistent with all other existing states. 1.x ECN
This ECR is intended to address a class of issues with PCI/PCIe architecture that relate to resource allocation inefficiency. To explain this, first we must define some terms: Static use cases refer to scenarios where resources are allocated at system boot and then typically not changed again; Dynamic use cases refer to scenarios where run-time resource rebalancing (allocation of new resources, freeing of resources no longer needed) is required, due to hot add/remove or other needs. In the Static cases there are limits on the size of hierarchies and number of Endpoints due to the Bus & Device Number “waste” caused by the PCI/PCIe architectural definition for Switches, and by the requirement that Downstream Ports associate an entire Bus Number with their Link. This proposal addresses this class of problems by “flattening” the use of Routing IDs so that Switches and Downstream Ports are able to make more efficient use of the available space. 3.x ECN
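As background for the Bus and Device Number waste the ECR describes, a small sketch (C) of the conventional 16-bit Routing ID layout; the flattened interpretation itself is defined by the ECR and is not reproduced here.

    #include <stdint.h>
    #include <stdio.h>

    /* Conventional Routing ID layout: Bus[15:8], Device[7:3], Function[2:0]. */
    static uint16_t rid_pack(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        return (uint16_t)((bus << 8) | ((dev & 0x1f) << 3) | (fn & 0x7));
    }

    int main(void)
    {
        /* A conventional Downstream Port associates a whole Bus Number with
         * its Link, so the device at the far end uses only Device 0: without
         * flattening, 248 of the 256 Routing IDs on that bus go unused. */
        uint16_t rid = rid_pack(5, 0, 0);
        printf("RID for bus 5, device 0, function 0: 0x%04x\n", rid);
        return 0;
    }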
1.x ECN
This proposal adds a new 11.5 mm x 13 mm PCIe BGA SSD form factor to the M.2 v1.1 specification.  show less 1.x ECN
2.x ECN
A PCI Express Receiver is required to tolerate 6 ns of lane to lane skew when operating at 8.0 GT/s. The PCI Express OCuLink Specification allowed the cable assembly to consume the entire budget. The Transmitter and traces routing to the OCuLink connector need some of this budget. The PCI Express Card Electromechanical Specification Revision 3.0 assigns 1.6 ns to the total interconnect lane to lane skew budget. show less 1.x ECN
The M.2 form factor is intended for Mobile Adapters. The M.2 is a natural transition from the Mini Card and Half-Mini Card (refer to the PCI Express Mini CEM Specification) to a smaller form factor in both size and volume. The M.2 is a family of form factors that enables expansion, contraction, and higher integration of functions onto a single form factor module solution. 1.x Specification
The M.2 form factor is intended for Mobile Adapters. The M.2 is a natural transition from the Mini Card and Half-Mini Card (refer to the PCI Express Mini CEM Specification) to a smaller form factor in both size and volume. The M.2 is a family of form factors that enables expansion, contraction, and higher integration of functions onto a single form factor module solution. 1.x Specification
This specification defines an implementation for small form factor PCI Express cards. The specification uses a qualified subset of the same signal protocol, electrical definitions, and configuration definitions as the PCI Express Base Specification, Revision 2.0. Where this specification does not explicitly define PCI Express characteristics, the PCI Express Base Specification governs. 2.x Specification
This specification defines an implementation for small form factor PCI Express cards. The specification uses a qualified subset of the same signal protocol, electrical definitions, and configuration definitions as the PCI Express Base Specification, Revision 2.0. Where this specification does not explicitly define PCI Express characteristics, the PCI Express Base Specification governs. 2.x Specification
This is a modification of the cable assembly memory map defined in OCuLink 1.0, Appendix A. The addresses for the data bytes contained within the external cable assembly's memory will be reorganized. In addition, some data in these fields are modified. show less 1.x ECN
Table 6-12 and Table 6-13 in Section 6.9 are modified to reflect connector requirements for server/datacenter segment. In addition, this proposal also reflects a clarification in the Introduction text, Section 1 to include the server/datacenter market segment. show less 1.x ECN
1.x Errata
Similar to, and based on, the Resizable BAR and Expanded Resizable BAR ECNs, this optional ECN adds a capability for PFs to be able to resize their VF BARs. This ECN is written with the expectation that the Expanded Resizable BAR ECN will have been released prior to this ECN’s release. This ECN supports all of the BAR sizes defined by both the Resizable BAR and Expanded Resizable BAR ECNs. show less 3.x ECN
Update SR-IOV specification to reflect current PCI Code and ID Assignment Specification, regarding PCI capabilities and PCI-E extended capabilities. Clarify the requirements for VFs regarding the other Capabilities added by ECNs that should have updated the SR-IOV specification but did not. show less 3.x ECN
MSI is enhanced to include an Extended Message Data Field for the function generating the interrupt. The MSI Capability Structure is modified to enable the new feature to be enabled/disabled; and a new Extended Message Data Field to be configured. This change only applies to MSI and not MSI-X. show less 3.x ECN
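A rough sketch (C) of the intended effect, assuming the Extended Message Data supplies the upper 16 bits of a 32-bit interrupt message payload when the feature is enabled; the field placement and names here are assumptions for illustration, not the register layout from the ECN.

    #include <stdint.h>
    #include <stdio.h>

    /* Assumption for illustration: with the feature enabled, the payload is the
     * 16-bit Extended Message Data in the upper half and the ordinary 16-bit
     * Message Data in the lower half; otherwise the upper half is zero. */
    static uint32_t msi_payload(uint16_t message_data,
                                uint16_t extended_message_data,
                                int extended_enabled)
    {
        if (extended_enabled)
            return ((uint32_t)extended_message_data << 16) | message_data;
        return message_data;
    }

    int main(void)
    {
        printf("payload = 0x%08x\n", msi_payload(0x0041, 0x1234, 1));
        return 0;
    }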
The Resizable BAR capability currently allows BARs of up to 512 GB (2^39 bytes), which allows address bits <38:0> to be passed into an Endpoint. This proposal extends Resizable BARs to sizes of up to 2^63 bytes, which supports the entire address space. 3.x ECN
Definition of electrical eye limits (Eye Height and Eye Width) at the M.2 connector for SSIC host and device transmitter is proposed to be added in the specification. show less 3.x ECN
This ECN implements a variety of spec modifications intended to correct inconsistencies related to, and to support more consistent implementation of, Root Complex integrated Endpoints, with a particular focus on issues relating to Single Root IO Virtualization (SR-IOV). show less 3.x ECN
This specification describes the PCI Express® architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification.  show less 3.x Specification
This specification describes the PCI Express® architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification.  show less 3.x Specification
1.x ECN
This ECN defines two sets of related changes to support an Emergency Power Reduction mechanism and to provide software visibility for this mechanism: 1. The Card Electromechanical Specification is updated to define an optional Emergency Power Reduction mechanism using RSVD pin B30. 2. The PCI Express Base Specification is updated to define an optional mechanism to indicate support for Emergency Power Reduction and to provide visibility as to the power reduction status of a Device. show less 3.x ECN
This ECN is intended to define a new form-factor and electrical pinout in the M.2 family. This proposal will allow PCIe and SATA to be delivered using a BGA package, expanding the use of the PCIe and SATA protocols in small form-factor applications. The new BGA pinout content is based on the Socket 3 Key-M definitions. The BGA pinout supports additional pins beyond those defined for Socket 3, for soldered-down form-factors. 1.x ECN
This document is a companion Specification to the PCI Express Base Specification and other PCI Express® documents listed in Section 1.1. The primary focus of the PCI Express OCuLink Specification is the implementation of internal and external small form factor PCI Express® connectors and cables optimized for the client and mobile market segments. This Specification discusses cabling and connector requirements to meet the 8.0 GT/s signaling needs in the PCI Express Base Specification. No assumptions are made regarding the implementation of PCI Express compliant components on either side of the Link; such components are addressed in other PCI Express Specifications. 1.x Specification
1.x ECN
3.x Errata
Define a Vendor-Specific Extended Capability that is not tied to the Vendor ID of the Component or Function. This capability includes a Vendor ID that determines the interpretation of the remainder of the capability. It is otherwise similar to the existing Vendor-Specific Extended Capability. 3.x ECN
This ECR describes the necessary changes to enable a new WWAN Key C definition to be included as an addition to the existing spec. The intent is to create a dedicated WWAN socket Key and pinout definition. This new pinout definition will be focused on WWAN specific interfaces and needs.  show less 1.x ECN
This document describes the hardware independent firmware interface for managing PCI, PCI-X, and PCI Express systems in a host computer. show less 3.x Specification
Add 2242 form factor for WWAN modules using Socket 2 with key B. show less 1.x ECN
3.x Errata
In ECN “Power-up requirements for PCIe side bands (PERST#, etc.)” - submitted by Dave Landsman and Ramdas Kachare - section 3.1.3.2.1 is redefined to provide a more realistic timing model for reset. show less 1.x ECN
The proposed change is to include 2 GNSS Aiding signals, which are already allocated in the Type 1216 pinout, in the Socket 1 Key E pinout and Type 2226/3026 pinout. Due to the lack of free pins in the Key E pinout, it is proposed to define 2 SDIO input signals as dual-function pins. They would be defined with their original SDIO functionality along with an alternate GNSS Aiding signal functionality, enabling a GNSS solution on Type 2230 cards in Socket 1 Key E designs. The GNSS signals to be added are the Tx Blanking and SYSCLK signals, and it is suggested to overlay them on the SDIO RESET# and SDIO CLK pins respectively, which are also inputs. In this way it is less likely to cause a potential contention. 1.x ECN
Definition of two of the three COEX pins as a UART Tx/Rx communication path between Socket 1 and Socket 2 to support WWAN / Connectivity coexistence. The intent is to definitively define the location of the source and sink sides of the signal path. 1.x ECN
The proposed change is to change the current voltage level of the NFC related signals (I2C DATA, I2C CLK and ALERT#) on the Connectivity pinouts and definitions from 3.3V to 1.8V signal level to better align with future platforms operating signal levels typical in the industry.  show less 1.x ECN
Provide specification for Physical Layer protocol aware Retimers for PCI Express 3.0/3.1. show less 3.x ECN
Section 3.1.3.2.1 is redefined to provide a more realistic timing model for reset. show less 1.x ECN
Definition of the four Audio pins to provide definitive functions assigned to each pin of the Audio interface. show less 1.x ECN
SMBus interface signals are included in sections 3.2 and 3.3 and related minor clarifications added to sections 1.2, 1.3, 2.2, 4.1, 4.2, 5.2.2, and 5.3. show less 1.x ECN
Mobile broadband peak data rates continue to increase. With LTE category 5, USB 2.0 will not meet the performance requirements. LTE category 5 peak data rates are 320 Mbps downlink; 75 Mbps uplink. Most USB 2.0 implementations achieve a maximum of about 240 Mbps throughput. Looking longer term, the ITU has set a target of 1 Gbits/s for low mobility applications for IMT Advanced. show less 2.x ECN
This ECN accomplishes two housekeeping tasks associated with DLLP encoding. show less 3.x ECN
Modifies specifications to provide revised JTOL curve for SRIS mode and provides additional frequency domain constraint of SSC profile jitter on reference clocks. show less 3.x ECN
Modify the Mini Card specification to tighten the power rail voltage tolerance. show less 2.x ECN
Modifies the limits used by the PLL bandwidth test to allow guardband for a single PLL test solution to be used at PCI-SIG compliance workshops without impacting pass/fail results for member companies. show less 3.x ECN
The M.2 form factor is used for Mobile Add-In cards. The M.2 is a natural transition from the Mini Card and Half-Mini Card to a smaller form factor in both size and volume. The M.2 is a family of form factors that will enable expansion, contraction, and higher integration of functions onto a single form factor module solution. show less 1.x Specification
This ECN affects the PCI Firmware Specification v3.1 and allows certain errors to be suppressed by the platform if the software stack has ensured error containment. It also removes the implementation note in Section 4.5.2.4, which is not representative of OS behavior. 3.x ECN
 Defines mechanisms to reduce the time software needs to wait before issuing a Configuration Request to a PCIe Function or RC-integrated PCI Function following power on, reset, or power state transitions. show less 3.x ECN
This specification is a companion for the PCI Express Base Specification, Revision 3.0. Its primary focus is the implementation of an evolutionary strategy with the current PCI™ desktop/server mechanical and electrical specifications. 3.x Specification
This test specification primarily covers tests of PCI Express platform firmware for features critical to PCI Express. This specification does not include the complete set of tests for a PCI Express System. show less 3.x Specification
This document provides test descriptions for PCI Express electrical testing. It is relevant for anyone building add-in cards or system boards to the PCI Express Card Electromechanical Specification 3.0. This specification does not describe the full set of PCI Express tests and assertions for these devices. show less 3.x Specification
This document primarily covers PCI Express testing of all defined PCI Express Device Types and RCRBs for the standard Configuration Space mechanisms, registers, and features in Chapter 6 of the PCI Local Bus Specification and Chapter 7 of the PCI Express Base Specification (some additional tested registers are described in other specifications that are referenced in the individual test description). show less 3.x Specification
This test specification primarily covers testing of PCI Express Device and Port types for compliance with the link layer and transaction layer requirements of the PCI Express Base Specification. Device and Port types that do not have a link (e.g., Root Complex Integrated Endpoints, Root Complex Event Collectors) are not tested under this test specification. At this point, this test specification does not describe the full set of PCI Express tests for all link layer or transaction layer requirements. show less 3.x Specification
This ECR defines an optional mechanism that establishes, depending on implementation, one or more substates of the L1 Link state, which allow for dramatically lower idle power, including near-complete removal of power for high speed circuits. 3.x ECN
This ECR defines a new logical layer mapping of PCI Express over the MIPI Alliance M-PHY specification. 3.x ECN
Defines an optional-normative Precision Time Measurement (PTM) capability. To accomplish this, Precision Time Measurement defines a new protocol of timing measurement/synchronization messages and a new capability structure. show less 3.x ECN
Provide specifications to enable separate Refclk with Independent Spread Spectrum Clocking (SSC) architecture. show less 3.x ECN
The PCI Express 3.0 specification describes a method to simulate 8 GT/s channel compliance using a statistical data eye simulator. To help members perform this simulation, a free open source tool called Seasim is provided below. This tool has been tested by members of the Electrical Working Group on multiple channels and has reached version 0.54, which should be useful for members designing 8 GT/s systems. 3.x Specification
Change the Sub-Class assignment for Root Complex Event Collector from 06h to 07h. show less 3.x ECN
This optional normative ECN defines enhancements to the Downstream Port Containment (DPC) ECN, an ECN that enabled automatic disabling of the Link below a Downstream Port following an uncorrectable error. The DPC ECN defined functionality for both Switch Downstream Ports and Root Ports. This ECN mostly defines functionality that is specific to Root Ports, functionality referred to as “RP Extensions for DPC”. show less 3.x ECN
This document primarily covers PCI Express testing of root complexes, switches, bridges, and endpoints for the standard configuration mechanisms, registers, and features in Chapter 7 of the PCI Express Base Specification, Revision 2.0. This specification does not describe the full set of PCI Express tests and assertions for these devices. show less 2.x Specification
This document primarily covers PCI Express testing of root complexes, switches, bridges, and endpoints for the standard configuration mechanisms, registers, and features in Chapter 7 of the PCI Express Base Specification, Revision 2.0. This specification does not describe the full set of PCI Express tests and assertions for these devices. show less 2.x Specification
This test specification primarily covers testing of all PCI Express Port types for compliance with the link layer requirements in Chapter 3 of the PCI Express Base Specification. At this point, this specification does not describe the full set of PCI Express tests for all link layer requirements. Going forward, as the testing gets mature, it is expected that more tests may be added as deemed necessary. show less 2.x Specification
This document provides test descriptions for PCI Express electrical testing. It is relevant for anyone building add-in cards or system boards to the PCI Express Card Electromechanical Specification, Revision 2.0. This specification does not describe the full set of PCI Express tests and assertions for these devices.  show less 2.x Specification
This is a companion specification to the PCI Express Base Specification. Its primary focus is the implementation of cabled PCI Express®. The discussions are confined to copper cabling and their connector requirements to meet the PCI Express signaling needs at 5.0 GT/s. No assumptions are made regarding the implementation of PCI Express compliant Subsystems on either side of the cabled Link; e.g., PCI Express Card Electromechanical (CEM), ExpressCard5 ™, ExpressModule™, PXI Express™, system board, or any other form factor. Such form factors are covered in other separate specifications. show less 2.x Specification
This is a companion specification to the PCI Express Base Specification. Its primary focus is the implementation of cabled PCI Express®. The discussions are confined to copper cabling and their connector requirements to meet the PCI Express signaling needs at 5.0 GT/s. No assumptions are made regarding the implementation of PCI Express compliant Subsystems on either side of the cabled Link; e.g., PCI Express Card Electromechanical (CEM), ExpressCard5 ™, ExpressModule™, PXI Express™, system board, or any other form factor. Such form factors are covered in other separate specifications. show less 2.x Specification
Modify the PCI Express Mini Card specification to define a new interface for tunable antennas. Modify the PCI Express Mini Card specification to enable existing coexistence signals to operate simultaneously with new tunable antenna control signals. 2.x ECN
This specification defines an implementation for small form factor PCI Express cards. The specification uses a qualified sub-set of the same signal protocol, electrical definitions, and configuration definitions as the PCI Express Base Specification, Revision 2.0. Where this specification does not explicitly define PCI Express characteristics, the PCI Express Base Specification governs. show less 2.x Specification
This specification defines an implementation for small form factor PCI Express cards. The specification uses a qualified sub-set of the same signal protocol, electrical definitions, and configuration definitions as the PCI Express Base Specification, Revision 2.0. Where this specification does not explicitly define PCI Express characteristics, the PCI Express Base Specification governs. show less 2.x Specification
This ECN defines a new error containment mechanism for Downstream Ports as well as minor enhancements that improve asynchronous card removal. Downstream Port Containment (DPC) is the automatic disabling of the Link below a Downstream Port following an uncorrectable error. This prevents the potential spread of data corruption (all TLPs subsequent to the error are prevented from propagating either Upstream or Downstream) and enables error recovery if supported by software. show less 3.x ECN
This optional normative ECN defines a simple protocol where a device can register interest in one or more cachelines in host memory, and later be notified via a hardware mechanism when any registered cachelines are updated. show less 3.x ECN
Receivers that operate at 8.0 GT/s with an impedance other than the range defined by the ZRX-DC parameter for 2.5 GT/s (40-60 Ohms) must meet additional behavior requirements in the following LTSSM states: Polling, Rx_L0s, L1, L2, and Disabled. show less 3.x ECN
The Process Address Space ID (PASID) ECN to the Base PCI Express Specification defines the PASID TLP Prefix. This companion ECN is optional normative and defines PASID TLP Prefix usage rules for ATS and PRI. show less 1.x ECN
This optional normative ECN defines an End-End TLP Prefix for conveying additional attributes associated with a request. The PASID TLP Prefix is an End-End TLP Prefix as defined in the PCI Express Base Specification. Routing elements that support End-End TLP Prefixes (i.e. have the End-End TLP Prefix Supported bit Set in the Device Capabilities 2 register) can correctly forward TLPs containing a PASID TLP Prefix. show less 1.x ECN
This document describes the hardware independent firmware interface for managing PCI™, PCI-X, and PCI Express® systems in a host computer. show less 3.x Specification
2.x Errata
This specification describes the PCI Express® architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification. show less 3.x Specification
This specification describes the PCI Express® architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification. show less 3.x Specification
This ECR requests making a change to the CLKREQ# asserted low to clock active timing when latency tolerance reporting is supported and enabled for the function. The change would be to allow this specified value to exceed 400ns up to a limit consistent with the latency value established by the Latency Tolerance Reporting (LTR) mechanism. show less 1.x ECN
This involves a minor upward compatible change in Chapter 3, Chapter 4 and a new Appendix T. show less 3.x ECN
 This change allows for all Root Ports with the End-End TLP Prefix Supported bit Set to have different values for the Max End-End TLP Prefixes field in the Device Capabilities 2 register. It also changes and clarifies error handling for a Root Port receiving a TLP with more End-End TLP Prefixes than it supports. show less 3.x ECN
This ECN is for the functional addition of a second wireless disable signal (W_DISABLE2#) as a new definition of Pin 51 (Reserved). When this optional second wireless disable signal is not implemented by the system, the original intent of a single wireless disable signal disabling all radios on the add-in card when asserted is still required. show less 1.x ECN
A number of PCIe base specifications ECNs have been approved that require software support. In some cases, platform firmware needs to know if the OS running supports certain features, or the OS needs to be able to request control of certain features from platform firmware. In other cases, the OS needs to know information about the platform that cannot be discovered through PCI enumeration, and ACPI must be used to supply the additional information.  show less 3.x ECN
The purpose of this document is to specify PCI™ I/O virtualization and sharing technology. The specification is focused on single root topologies; e.g., a single computer that supports virtualization technology. show less 1.x Specification
The purpose of this document is to specify PCI™ I/O virtualization and sharing technology. The specification is focused on single root topologies; e.g., a single computer that supports virtualization technology. show less 1.x Specification
This ECR covers a proposed modification of Section 4.2, Power Consumption, within the CEM Specification version 2.0. 2.x ECN
Prior to this ECN, all PCIe external Links were required to support ASPM L0s. This ECN changes the Base Specification to permit ASPM L0s support to be optional unless the applicable formfactor specification explicitly requires it. show less 2.x ECN
This ECR proposes to add a new mechanism for platform central resource (RC) power state information to be communicated to Devices. This mechanism enables Optimized Buffer Flush/Fill (OBFF) by allowing the platform to indicate optimal windows for device bus mastering & interrupt activity. Devices can use internal buffering to shape traffic to fit into these optimal windows, reducing platform power impact. show less 2.x ECN
This specification describes the PCI Express® architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification. show less 2.x Specification
This specification describes the PCI Express® architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification. show less 2.x Specification
2.x Errata
This specification describes the extensions required to allow PCI Express Devices to interact with an address translation agent (TA) in or above a Root Complex (RC) to enable translations of DMA addresses to be cached in the Device. The purpose of having an Address Translation Cache (ATC) in a Device is to minimize latency and to provide a scalable distributed caching solution that will improve I/O performance while alleviating TA resource pressure. This specification must be used in conjunction with the PCI Express Base Specification, Revision 1.1, and associated ECNs. show less 1.x Specification
Emerging usage model trends indicate a requirement for increase in header size fields to provide additional information than what can be accommodated in currently defined TLP header sizes. The TLP Prefix mechanism extends the header size by adding DWORDS to the front of headers that carry additional information. show less 2.x ECN
This ECN modifies the system board transmitter path requirements (VTXS and VTXS_d) at 5 GT/s. As a consequence the minimum requirements for the add-in card receiver path sensitivity at 5 GT/s are also updated. show less 2.x ECN
This optional normative ECR defines a mechanism by which a Requester can provide hints on a per transaction basis to facilitate optimized processing of transactions that target Memory Space. The architected mechanisms may be used to enable association of system processing resources (e.g. caches) with the processing of Requests from specific Functions or enable optimized system specific (e.g. system interconnect and Memory) processing of Requests. show less 2.x ECN
The change allows a Function to use Extended Tag fields (256 unique tag values) by default; this is done by allowing the Extended Tag Enable control field to be set by default. show less 2.x ECN
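A self-contained sketch (C) of the kind of write involved: setting the Extended Tag Field Enable bit (bit 8 of the Device Control register in the PCI Express Capability). The capability offset and the in-memory stand-in for configuration space are illustrative only.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* A fake 4 KB configuration space standing in for a real device, so the
     * sketch is self-contained and runnable. */
    static uint8_t cfg[4096];

    static uint16_t cfg_read16(uint16_t off)              { uint16_t v; memcpy(&v, &cfg[off], 2); return v; }
    static void     cfg_write16(uint16_t off, uint16_t v) { memcpy(&cfg[off], &v, 2); }

    #define DEVCTL_OFFSET         0x08u        /* Device Control within the PCIe Capability */
    #define DEVCTL_EXT_TAG_ENABLE (1u << 8)    /* Extended Tag Field Enable */

    int main(void)
    {
        uint16_t pcie_cap = 0x70;  /* hypothetical PCI Express Capability offset */
        uint16_t devctl = cfg_read16(pcie_cap + DEVCTL_OFFSET);
        cfg_write16(pcie_cap + DEVCTL_OFFSET, devctl | DEVCTL_EXT_TAG_ENABLE);
        printf("Device Control: 0x%04x\n", cfg_read16(pcie_cap + DEVCTL_OFFSET));
        return 0;
    }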
This document primarily covers PCI Express testing of root complexes, switches, bridges, and endpoints for the standard configuration mechanisms, registers, and features in Chapter 7 of the PCI Express Base Specification, Revision 2.0. This specification does not describe the full set of PCI Express tests and assertions for these devices. show less 2.x Specification
This ECR proposes to add a new mechanism for Endpoints to report their service latency requirements for Memory Reads and Writes to the Root Complex such that central platform resources (such as main memory, RC internal interconnects, snoop resources, and other resources associated with the RC) can be power managed without impacting Endpoint functionality and performance. show less 2.x ECN
This document contains a list of Test Assertions and a set of Test Definitions pertaining to the Transaction Layer. Assertions are statements of spec requirements which are measured by the algorithm details as specified in the Test Definitions. “Basic Functional Tests” are Test Algorithms which perform basic tests for key elements of Transaction Layer device functionality. This document does not describe a full set of PCI Express tests and assertions and is in no way intended to measure products for full design validation. Tests described here should be viewed as tools to checkpoint the result of product validation – not as a replacement for that effort. show less 2.x Specification
This ECN proposes to add a new ordering attribute which devices may optionally support to provide enhanced performance for certain types of workloads and traffic patterns. The new ordering attribute relaxes ordering requirements between unrelated traffic by comparing the Requester/Completer IDs of the associated TLPs. show less 2.x ECN
DPA (Dynamic Power Allocation) extends existing PCIe device power management to provide active (D0) device power management substates for appropriate devices, while comprehending existing PCIe PM Capabilities including PCI-PM and Power Budgeting. show less 2.x ECN
The purpose of this document is to specify PCI® I/O virtualization and sharing technology. The specification is focused on multi-root topologies; e.g., a server blade enclosure that uses a PCI Express® Switch-based topology to connect server blades to PCI Express Devices or PCI Express to-PCI Bridges and enable the leaf Devices to be serially or simultaneously shared by one or more System Images (SI). Unlike the Single Root IOV environment, independent SI may execute on disparate processing components such as independent server blades. show less 1.x Specification
This optional normative ECN adds Multicast functionality to PCI Express by means of an Extended Capability structure for applicable Functions in Root Complexes, Switches, and components with Endpoints. The Capability structure defines how Multicast TLPs are identified and routed. It also provides means for checking and enforcing send permission with Function-level granularity. The ECN identifies Multicast errors and adds an MC Blocked TLP error to AER for reporting those errors. show less 2.x ECN
PCI Express (PCIe) defines error signaling and logging mechanisms for errors that occur on a PCIe interface and for errors that occur on behalf of transactions initiated on PCIe. It does not define error signaling and logging mechanisms for errors that occur within a component or are unrelated to a particular PCIe transaction. show less 2.x ECN
This optional ECN adds a capability for Functions with BARs to report various options for sizes of their memory mapped resources that will operate properly. Also added is an ability for software to program the size to configure the BAR to. show less 2.x ECN
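For illustration, a sketch (C) of the flow this capability enables: software reads a set of supported sizes and programs the one it wants. The size encoding used here (bit n set means a size of 2^(n+20) bytes is supported) is invented for the example and is not the register layout defined by this ECN.

    #include <stdint.h>
    #include <stdio.h>

    /* Return the index of the highest set bit, or -1 if none are set. */
    static int highest_set_bit(uint32_t v)
    {
        int bit = -1;
        while (v) { bit++; v >>= 1; }
        return bit;
    }

    int main(void)
    {
        uint32_t supported = 0x0000001F;        /* hypothetical: 1 MB .. 16 MB supported */
        int sel = highest_set_bit(supported);   /* choose the largest reported option    */
        uint64_t bar_size = 1ULL << (sel + 20);
        printf("selected encoding %d -> BAR size %llu bytes\n",
               sel, (unsigned long long)bar_size);
        return 0;
    }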
This optional normative ECN defines 3 new PCIe transactions, each of which carries out a specific Atomic Operation (“AtomicOp”) on a target location in Memory Space. The 3 AtomicOps are FetchAdd (Fetch and Add), Swap (Unconditional Swap), and CAS (Compare and Swap). FetchAdd and Swap support operand sizes of 32 and 64 bits. CAS supports operand sizes of 32, 64, and 128 bits. show less 2.x ECN
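A small sketch (C) of the Completer-side semantics of the three AtomicOps on a 64-bit target, using a plain variable in place of a Memory Space location; in every case the original value of the target is returned to the Requester.

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t fetch_add(uint64_t *target, uint64_t operand)
    {
        uint64_t original = *target;
        *target = original + operand;   /* add the operand to the target */
        return original;
    }

    static uint64_t swap(uint64_t *target, uint64_t operand)
    {
        uint64_t original = *target;
        *target = operand;              /* unconditional write */
        return original;
    }

    static uint64_t cas(uint64_t *target, uint64_t compare, uint64_t swap_val)
    {
        uint64_t original = *target;
        if (original == compare)        /* write only when the compare matches */
            *target = swap_val;
        return original;
    }

    int main(void)
    {
        uint64_t mem = 10;
        printf("FetchAdd returned %llu, mem now %llu\n",
               (unsigned long long)fetch_add(&mem, 5), (unsigned long long)mem);
        printf("Swap returned %llu, mem now %llu\n",
               (unsigned long long)swap(&mem, 100), (unsigned long long)mem);
        printf("CAS returned %llu, mem now %llu\n",
               (unsigned long long)cas(&mem, 100, 7), (unsigned long long)mem);
        return 0;
    }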
The main objective of this specification is to support PCI Express® add-in cards that require higher power than specified in the PCI Express Card Electromechanical Specification and the PCI Express x16 Graphics 150W-ATX Specification. show less 1.x Specification
1.x Errata
This specification defines an implementation for small form factor PCI Express cards. The specification uses a qualified sub-set of the same signal protocol, electrical definitions, and configuration definitions as the PCI Express Base Specification, Revision 1.1. Where this specification does not explicitly define PCI Express characteristics, the PCI Express Base Specification governs. show less 1.x Specification
This specification defines an implementation for small form factor PCI Express cards. The specification uses a qualified sub-set of the same signal protocol, electrical definitions, and configuration definitions as the PCI Express Base Specification, Revision 1.1. Where this specification does not explicitly define PCI Express characteristics, the PCI Express Base Specification governs. show less 1.x Specification
The purpose of this document is to specify PCI™ I/O virtualization and sharing technology. The specification is focused on single root topologies; e.g., a single computer that supports virtualization technology. show less 1.x Specification
For virtualized and non-virtualized environments, a number of PCI-SIG member companies have requested that the current constraints on number of Functions allowed per multi-Function Device be increased to accommodate the needs of next generation I/O implementations. This ECR specifies a new method to interpret the Device Number and Function Number fields within Routing IDs, Requester IDs, and Completer IDs, thereby increasing the number of Functions that can be supported by a single Device. show less 2.x ECN
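A minimal sketch (C) contrasting the conventional interpretation of the low byte of a Routing ID with the reinterpretation this ECR specifies, under which the Device Number and Function Number fields are treated together as a single 8-bit Function Number, allowing up to 256 Functions per Device.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t rid = 0x0342;          /* bus 3, low byte 0x42 */
        uint8_t  bus = rid >> 8;
        uint8_t  low = rid & 0xff;

        /* Conventional interpretation: 5-bit Device Number, 3-bit Function Number. */
        printf("bus %u, device %u, function %u\n", bus, low >> 3, low & 0x7);

        /* Reinterpreted: the whole low byte is the Function Number. */
        printf("bus %u, function %u\n", bus, low);
        return 0;
    }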
This specification is a companion for the PCI Express Base Specification, Revision 2.0. Its primary focus is the implementation of an evolutionary strategy with the current PCI™ desktop/server mechanical and electrical specifications. The discussions are confined to ATX or ATX-based form factors. Other form factors, such as PCI Express® Mini Card, are covered in other separate specifications. (2.x Specification)
This specification describes the extensions required to allow PCI Express Devices to interact with an address translation agent (TA) in or above a Root Complex (RC) to enable translations of DMA addresses to be cached in the Device. The purpose of having an Address Translation Cache (ATC) in a Device is to minimize latency and to provide a scalable distributed caching solution that will improve I/O performance while alleviating TA resource pressure. This specification must be used in conjunction with the PCI Express Base Specification, Revision 1.1, and associated ECNs. (1.x Specification)
1.x Errata
This is a companion specification to the PCI Express Base Specification. Its primary focus is the implementation of cabled PCI Express®. The discussions are confined to copper cabling and the associated connector requirements to meet the PCI Express signaling needs at 2.5 GT/s. No assumptions are made regarding the implementation of PCI Express compliant Subsystems on either side of the cabled Link; e.g., PCI Express Card Electromechanical (CEM), ExpressCard™, ExpressModule™, PXI Express™, system board, or any other form factor. Such form factors are covered in other separate specifications. (1.x Specification)
This specification describes the PCI Express® architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express Specification. (2.x Specification)
This ECN rectifies the differences between the DMTF SM CLP Specification and its supporting documents with the current PCI Firmware 3.0 Specification. Also, it clarifies the supporting documents required for successfully implementing CLP in an X86 PCI FW 3.0 compatible option ROM. (3.x ECN)
This ECN makes clarifications such that the system firmware and multiple operating systems can have the same interpretation of the specification and work interoperably. (3.x ECN)
This ECN adds a function to the _DSM Definitions for PCI to provide an indication to an operating system that it can ignore the PCI boot configuration set up by the firmware during system initialization. (3.x ECN)
This document describes the hardware independent firmware interface for managing PCI, PCI-X, and PCI Express™ systems in a host computer. (3.x Specification)
This specification is a companion for the PCI Express Base Specification, Revision 1.1. Its primary focus is the implementation of an evolutionary strategy with the current PCI desktop/server mechanical and electrical specifications. The discussions are confined to ATX or ATX-based form factors. Other form factors, such as PCI Express Mini Card, are covered in other separate specifications. (1.x Specification)
This specification describes the PCI Express architecture, interconnect attributes, fabric management, and the programming interface required to design and build systems and peripherals that are compliant with the PCI Express specification. (1.x Specification)
This document is a companion specification to the PCI Express Base Specification. Its primary focus is the implementation of a modular I/O form factor aimed at the mechanical and electrical requirements of workstations and servers. The discussions are confined to the modules and their chassis slot requirements. Other form factors are covered in other separate specifications. (1.x Specification)
The objectives of this specification are: support for PCI Express™ graphics add-in cards that are higher power than specified in the PCI Express Card Electromechanical Specification, forward-looking scalability for the future, allowing evolution of the PC architecture including graphics, upgradeability, and an enhanced end user experience. (1.x Specification)
This addendum to the PCI Express Base 1.0a describes a low power extension intended primarily to support the reduced power requirements of mobile platforms. Its scope is restricted to the electrical layer and corresponds to Section 4.3 of PCI Express Base 1.0a. (1.x Specification)
This specification describes the PCI Express to PCI/PCI-X bridge (also referred to herein as PCI Express bridge) architecture, interface requirements, and the programming model. (1.x Specification)
Technology: PCI Firmware
This ECN defines a _DSM interface for the OS to learn whether the platform supports generation of synchronous processor exceptions. If firmware has granted the OS control of the DPC Capability (see _OSC Control[7]) and the platform supports generation of synchronous processor exceptions for an RP PIO error type, the OS may set the corresponding RP PIO Exception bit. If the OS does not have control of the DPC Capability, it may read the RP PIO Exception Register to learn how the platform will report RP PIO errors. (3.x ECN)
Update the PCI Firmware Specification to remove offensive terms and adopt inclusive language. This follows a similar update that was implemented in the PCI Base specification. (3.x ECN)
This ECN extends the functionality provided by the TPH Features _DSM introduced in Revision 3.3 of the PCI Firmware Specification. (3.x ECN)
This document describes the hardware independent firmware interface for managing PCI, PCI-X, and PCI Express™ systems in a host computer. (3.x Specification)
Changes are requested to be made to Section 4.5.1, _OSC Interface for PCI Host Bridge Devices, and Section 4.5.2.4, Dependencies Between _OSC Control Bits. The changes will enable the Operating System and system firmware to negotiate ownership of the System Firmware Intermediary (SFI) Extended Capability Structure. (3.x ECN)
This ECN adds new capabilities by way of adding a new Device Specific Method (_DSM) to the PCI Firmware Spec. (3.x ECN)
Since the 3.2 release of the Firmware Specification, several ECRs have been submitted which add _DSM functions or modify existing Functions. The Revision value for each added/modified Function has been included in each ECR, but the value has been applied in a manner inconsistent with previous revisions of the Specification. (3.x ECN)
Changes are requested to clarify Section 4.6.5. There is a lot of confusion about where the _DSM object should be located and what Function 5 means. This Change Notice proposes no functional changes. (3.x ECN)
This ECN is based on the TLP Processing Hint (TPH) optional capability. The Steering Tag (ST) field handling is platform specific, and this ECN provides a model for how a device driver can determine if the platform root complex supports decode of Steering Tags for specific vendor handling. (3.x ECN)
Changes are requested to be made to Section 4.5.1, _OSC Interface for PCI Host Bridge Devices, and Section 4.5.2.4, Dependencies Between _OSC Control Bits. The changes will enable the Operating System to advertise to the firmware whether it is capable of supporting the _HPX PCI Express Descriptor Setting Record (Type 3). They also enable the Operating System and the Firmware to negotiate ownership of the PCIe Completion Timeout registers. (3.x ECN)
The changes affect the PCI Firmware Specification, Revision 3.2 and will enable the Operating System to advertise its Downstream Port Containment related capabilities to the firmware. They also enable the Operating System and the Firmware to negotiate ownership of the Downstream Port Containment extended capability register block and collaboratively manage Downstream Port Containment events. (3.x ECN)
The changes affect the PCI Firmware Specification, Revision 3.2 and enable the MCFG table format to allow for multiple memory mapped base address entries (instances of Table 4-3) per single PCI Segment Group. (3.x ECN)
This ECN adds two capabilities by way of adding functions to the _DSM definition in the PCI Firmware Spec. (3.x ECN)
This document describes the hardware independent firmware interface for managing PCI, PCI-X, and PCI Express™ systems in a host computer. (3.x Specification)
This ECN affects the PCI Firmware Specification v3.1 and allows certain errors to be suppressed by the platform if the software stack has ensured error containment. It also removes the implementation note in Section 4.5.2.4, which is not representative of OS behavior. (3.x ECN)
This document describes the hardware independent firmware interface for managing PCI™, PCI-X, and PCI Express® systems in a host computer. (3.x Specification)
A number of PCIe Base specification ECNs have been approved that require software support. In some cases, platform firmware needs to know whether the running OS supports certain features, or the OS needs to be able to request control of certain features from platform firmware. In other cases, the OS needs to know information about the platform that cannot be discovered through PCI enumeration, and ACPI must be used to supply the additional information. (3.x ECN)
This ECN rectifies the differences between the DMTF SM CLP Specification and its supporting documents with the current PCI Firmware 3.0 Specification. Also, it clarifies the supporting documents required for successfully implementing CLP in an X86 PCI FW 3.0 compatible option ROM. (3.x ECN)
This ECN makes clarifications such that the system firmware and multiple operating systems can have the same interpretation of the specification and work interoperably. (3.x ECN)
This ECN adds a function to the _DSM Definitions for PCI to provide an indication to an operating system that it can ignore the PCI boot configuration set up by the firmware during system initialization. (3.x ECN)
This document describes the hardware independent firmware interface for managing PCI, PCI-X, and PCI Express™ systems in a host computer. (3.x Specification)
Technology: PCI Conventional
This document describes the hardware independent firmware interface for managing PCI, PCI-X, and PCI Express™ systems in a host computer. (3.x Specification)
Changes are requested to be made to Section 4.5.1, _OSC Interface for PCI Host Bridge Devices, and Section 4.5.2.4, Dependencies Between _OSC Control Bits. The changes will enable the Operating System to advertise to the firmware whether it is capable of supporting the _HPX PCI Express Descriptor Setting Record (Type 3). They also enable the Operating System and the Firmware to negotiate ownership of the PCIe Completion Timeout registers. (3.x ECN)
The changes affect the PCI Firmware Specification, Revision 3.2 and will enable the Operating System to advertise its Downstream Port Containment related capabilities to the firmware. They also enable the Operating System and the Firmware to negotiate ownership of the Downstream Port Containment extended capability register block and collaboratively manage Downstream Port Containment events. (3.x ECN)
The changes affect the PCI Firmware Specification, Revision 3.2 and enable the MCFG table format to allow for multiple memory mapped base address entries (instances of Table 4-3) per single PCI Segment Group. (3.x ECN)
This document describes the hardware independent firmware interface for managing PCI, PCI-X, and PCI Express™ systems in a host computer. (3.x Specification)
Enhanced Allocation is an optional Conventional PCI Capability that may be implemented by Functions to indicate fixed (non-reprogrammable) I/O and memory ranges assigned to the Function, as well as supporting new resource “type” definitions and future extensibility to also support reprogrammable allocations. (3.x ECN)
This ECR provides two additional ACPI DSM functions to inform the OS about possible time reduction opportunities. (3.x ECN)
This ECN affects the PCI Firmware Specification v3.1 and allows certain errors to be suppressed by the platform if the software stack has ensured error containment. It also removes the implementation note in Section 4.5.2.4, which is not representative of OS behavior. (3.x ECN)
Update the references to the latest UEFI Specification. Make clarifications in Section 5.1.2 that the pointers to the Device List, the Configuration Utility Code header, and the DMTF CLP Entry Point are not applicable to UEFI Option ROMs. (3.x ECN)
This ECN updates the subclass ID description in the Class Code & Capability ID Specification. (3.x ECN)
This document describes the hardware independent firmware interface for managing PCI™, PCI-X, and PCI Express® systems in a host computer. (3.x Specification)
This ECN extracts the Class Code definitions from Appendix D and the Capability ID definitions from Appendix H, for consolidation into a new standalone document that’s easier to maintain. The new document will also consolidate Extended Capability definitions from the PCIe Base spec and various other PCIe specs. (3.x ECN)
This ECR proposes a mechanism (a new extension to _DSM) to make device names/labels under Operating Systems deterministic. Currently, there is no well-defined mechanism to consistently associate platform-specific device names with instances of a device type under an operating system. As a result, instance labels for specific device types under various operating systems (e.g., the ethX label for a networking device instance under the Linux OS) do not always map to the platform-designated device labels. Additionally, the instance labels can change based on the system configuration. For example, under Linux operating systems, the “eth0” label does not necessarily map to the first embedded networking device as designed in a given platform. Depending on the hardware bus topology and the current configuration, including the number and type of networking adapters installed, the eth0 label assignment could change in a given platform. (3.x ECN)
A number of PCIe Base specification ECNs have been approved that require software support. In some cases, platform firmware needs to know whether the running OS supports certain features, or the OS needs to be able to request control of certain features from platform firmware. In other cases, the OS needs to know information about the platform that cannot be discovered through PCI enumeration, and ACPI must be used to supply the additional information. (3.x ECN)
This updates the references to sections in the DMTF Server Management Command Line Protocol (SM CLP) Specification (DSP0214) to match the most recent version of the SM CLP Specification (v1.0.2). It also clarifies that the references to the DMTF SM CLP specification are referencing v1.0.2, and adds a reference to the DMTF SM CLP specification in Section 1.2, Reference Documents. (3.x ECN)
This is a request to update the UEFI PCI Services. No functional changes. In the case of the UGA reference, UGA has been obsoleted by the UEFI Specification and is replaced by the new GOP. (3.x ECN)
This ECN allows the power to unoccupied slots to be off at hand-off, which is a reasonable implementation and one that many systems use today. (3.x ECN)
This ECN rectifies the differences between the DMTF SM CLP Specification and its supporting documents with the current PCI Firmware 3.0 Specification. Also, it clarifies the supporting documents required for successfully implementing CLP in an X86 PCI FW 3.0 compatible option ROM. (3.x ECN)
For conventional PCI devices integrated into a PCI Express Root Complex, this defines an optional capability structure that includes selected advanced features originally defined for PCI Express. This capability is intended to be extensible in the future. For the initial definition, the Transactions Pending (TP) and Function Level Reset (FLR) features are included. (3.x ECN)
3.x Errata
This ECN makes clarifications such that the system firmware and multiple operating systems can have the same interpretation of the specification and work interoperably. (3.x ECN)
This ECN adds a function to the _DSM Definitions for PCI to provide an indication to an operating system that it can ignore the PCI boot configuration set up by the firmware during system initialization. (3.x ECN)
This document describes the hardware independent firmware interface for managing PCI, PCI-X, and PCI Express™ systems in a host computer. (3.x Specification)
This ECN is a request for modifications to the paragraphs describing the Interrupt Line Register usage for the PCI-to-PCI Bridge. The purpose is to clarify the differences between the usages on PC-compatible systems and DIG64-compliant systems. (1.x ECN)
The functional changes proposed involve the definition of a new Capabilities List ID (and associated Capability register set). This new Capabilities ID will identify to system firmware (BIOS/OROM) a Serial ATA (SATA) host bus adapter’s (HBA) support of optional features that may be defined in the particular SATA HBA specification (i.e., Advanced Host Controller Interface - AHCI). (3.x ECN)
The goal of this specification is to establish a standard set of PCI peripheral power management hardware interfaces and behavioral policies. Once established, this infrastructure enables an operating system to intelligently manage the power of PCI functions and buses. (1.x Specification)
This document contains the formal specifications of the protocol, electrical, and mechanical features of the PCI Local Bus Specification, Revision 3.0, as the production version effective February 3, 2004. The PCI Local Bus Specification, Revision 2.3, issued March 29, 2002, is not superseded by this specification. (3.x Specification)
2.x Errata
Create a new class code for Serial ATA host-based adapters (HBAs) that can be identified by system software. The new class code will allow system software to identify a controller as being attached to Serial ATA devices and Serial Attached SCSI devices. This will help system software load drivers that may be specific to these interfaces. (3.x ECN)
Extend the current MSI functionality to support a larger number of MSI vectors, plus a separate and independent Message Address and Message Data for each MSI vector. Allow more flexibility when SW allocates fewer MSI vectors than requested by HW. Enable per-vector masking capability. (2.x ECN)
Extend the current MSI functionality to support a larger number of MSI vectors, plus a separate and independent Message Address and Message Data for each MSI vector. Allow more flexibility when SW allocates fewer MSI vectors than requested by HW. Enable per-vector masking capability. (3.x ECN)
This specification defines the behavior of a compliant PCI-to-PCI bridge. A PCI-to-PCI bridge that conforms to this specification and the PCI Local Bus Specification is a compliant implementation. Compliant bridges may differ from each other in performance and, to some extent, functionality. (1.x Specification)
Restrict PCI Standard Hot-Plug Controller (SHPC) device drivers to a memory access granularity of at most one DWORD (aligned) when reading or writing the SHPC memory space. (1.x ECN)
Changes are to the PCI Standard Hot-Plug Controller and Subsystem Specification, Revision 1.0. This ECN extends the Standard Hot-Plug Controller Specification to support the additional PCI-X speeds and modes allowed by the PCI-X 2.0 specification. Specifically, this ECN provides the required hardware and software extensions needed to support the new PCI-X 2.0 speeds of 266 and 533, and also the software extensions needed to control PCI-X 2.0 mode ECC and parity operation. (1.x ECN)
The intent of this ECR is to update the PCI base specifications to include PCI connector metallurgical practices which have been commonly accepted or introduced since the original wording was drafted before the PCI 2.0 specification. With an overwhelming majority of the PCI and PCI-X connectors shipped in the world not meeting the PCI specification for contact finish plating, the most efficient way to rectify the situation is to correct the specification. (2.x ECN)
The intent of this ECR is to update the PCI base specifications to include PCI connector metallurgical practices which have been commonly accepted or introduced since the original wording was drafted before the PCI 2.0 specification. With an overwhelming majority of the PCI and PCI-X connectors shipped in the world not meeting the PCI specification for contact finish plating, the most efficient way to rectify the situation is to correct the specification. (3.x ECN)
This document contains the formal specifications of the protocol, electrical, and mechanical features of the PCI Local Bus Specification, Revision 2.3, as the production version effective March 29, 2002. The PCI Local Bus Specification, Revision 2.2, issued December 18, 1998, is superseded by this specification. (2.x Specification)
The primary objective of this specification is to enable higher availability of file and application servers by standardizing key aspects of the process of removing and installing PCI add-in cards while the system is running. Although these same principles can be applied to desktop and portable systems using PCI buses, the operations described here target server platforms. (1.x Specification)
The primary purpose of this document is to specify a standard implementation of a PCI Hot-Plug Controller, called the Standard Hot-Plug Controller (SHPC). (1.x Specification)
The Mini PCI Specification defines an alternate implementation for small form factor PCI cards, referred to in this specification as a Mini PCI Card. This specification uses a qualified sub-set of the same signal protocol, electrical definitions, and configuration definitions as the PCI Local Bus Specification. (1.x Specification)
This document describes the software interface presented by the PCI BIOS functions. This interface provides a hardware independent method of managing PCI devices in a host computer. (2.x Specification)



PCI Code and ID Assignment Specification, Revision 1.11 (24 Jan 2019)


PCI CODE AND ID ASSIGNMENT SPECIFICATION, REV. 1.11

Revision History

1.0 (9/9/2010): Initial release.
1.1 (3/15/2012): Incorporated approved ECNs.
1.2 (3/15/2012): Incorporated ECN for Accelerator Class code, added PI for xHCI.
1.3 (9/4/2012): Updated Section 1.2, Base Class 01h, Sub-Class 00h by adding Programming Interfaces 11h, 12h, 13h, and 21h. Added Notes 3, 4, and 5.
1.4 (8/29/2013): Updated Section 1.2, Base Class 01h, to add Sub-Class 09h. Updated Section 1.9, Base Class 08h, to add Root Complex Event Collector, Sub-Class 07h. Updated Section 1 and added Section 1.20 to define Base Class 13h. Updated Chapter 3 to define Extended Capability IDs 001Dh through 0022h. Reformatted Notes in Sections 1.2 and 1.7 through 1.10.
1.5 (3/6/2014): Updated references to NVM Express in Section 1.9, Base Class 08h. Updated Section 1.2 to clarify SOP entries in Base Class 01h, add a proper reference to NVMHCI, update UFS entries, and address other minor editorial issues. Updated Section 3, Extended Capability ID descriptions 19h, 1Ch, 1Fh.
1.6 (12/9/2014): Updated Section 1.3, Class 02h, to add Sub-Class 08h. Updated Section 1.14, Base Class 0Dh, to add Sub-Classes 40h and 41h. Updated Section 2 to add Capability ID 14h. Added the Designated Vendor-Specific Extended Capability ID.
1.7 (8/13/2015): Updated/modified Section 1.5, Base Class 04h, for Multimedia devices to accurately reflect use of this class for High Definition Audio (HD-A). Small edits.
1.8 (9/1/2016): Added Extended Capability IDs for VF Resizable BAR, Data Link Feature, Physical Layer 16.0 GT/s, and Lane Margining at the Receiver.
1.9 (5/18/2017): Added the Hierarchy ID Extended Capability ID. Added the Flattening Portal Bridge Capability ID. Added Class/Sub-Class/PI for the I3C Host Controller.
1.10 (11/8/2017): Added NPEM. New legal boilerplate language (p. 3).
1.11 (1/24/2019): Added the Physical Layer 32.0 GT/s, Alternate Protocol, and System Firmware Intermediary Extended Capability IDs. Fixed errata regarding the name of the “TPH Requester” Extended Capability. Added a Programming Interface for the NVM Express (NVMe) administrative controller and related text changes. Assorted editorial corrections and enhancements.

PCI-SIG® disclaims all warranties and liability for the use of this document and the information contained herein and assumes no responsibility for any errors that may appear in this document, nor does PCI-SIG make a commitment to update the information contained herein. Contact the PCI-SIG office to obtain the latest revision of this specification. Questions regarding the PCI Code and ID Assignment Specification or membership in PCI-SIG may be forwarded to: Membership Services www.pcisig.com E-mail: [email protected] Phone: 503-619-0569 Fax: 503-644-6708

Technical Support [email protected]

This specification is the sole property of PCI-SIG® and provided under a click through license through its website, www.pci-sig.com. PCI-SIG disclaims all warranties and liability for the use of this document and the information contained herein and assumes no responsibility for any errors that may appear in this document, nor does PCI-SIG make a commitment to update the information contained herein.

This PCI Specification is provided “as is” without any warranties of any kind, including any warranty of merchantability, non-infringement, fitness for any particular purpose, or any warranty otherwise arising out of any proposal, specification, or sample. PCI-SIG disclaims all liability for infringement of proprietary rights, relating to use of information in this specification. This document itself may not be modified in any way, including by removing the copyright notice or references to PCI-SIG. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted herein. PCI, PCI Express, PCIe, and PCI-SIG are trademarks or registered trademarks of PCI-SIG. All other product names are trademarks, registered trademarks, or servicemarks of their respective owners.

Copyright © PCI-SIG 2019. All Rights Reserved.


Contents

OBJECTIVE OF THE SPECIFICATION
REFERENCE DOCUMENTS
DOCUMENTATION CONVENTIONS
TERMS AND ACRONYMS
1. CLASS CODES
1.1. BASE CLASS 00H
1.2. BASE CLASS 01H
1.3. BASE CLASS 02H
1.4. BASE CLASS 03H
1.5. BASE CLASS 04H
1.6. BASE CLASS 05H
1.7. BASE CLASS 06H
1.8. BASE CLASS 07H
1.9. BASE CLASS 08H
1.10. BASE CLASS 09H
1.11. BASE CLASS 0AH
1.12. BASE CLASS 0BH
1.13. BASE CLASS 0CH
1.14. BASE CLASS 0DH
1.15. BASE CLASS 0EH
1.16. BASE CLASS 0FH
1.17. BASE CLASS 10H
1.18. BASE CLASS 11H
1.19. BASE CLASS 12H
1.20. BASE CLASS 13H
2. CAPABILITY IDS
3. EXTENDED CAPABILITY IDS

Tables

TABLE 2-1: CAPABILITY IDS
TABLE 3-1: EXTENDED CAPABILITY IDS

Objective of the Specification

This specification contains the Class Code and Capability ID descriptions originally contained in the PCI Local Bus Specification, bringing them into a standalone document that is easier to reference and maintain. This specification also consolidates Extended Capability ID assignments from the PCI Express Base Specification and various other PCI specifications.

Reference Documents

PCI Express Base Specification
PCI Local Bus Specification
PCI-X Protocol Addendum to the PCI Local Bus Specification

Documentation Conventions

Capitalization
Some terms are capitalized to distinguish their definition in the context of this document from their common English meaning. Words not capitalized have their common English meaning. When terms such as “memory write” or “memory read” appear completely in lower case, they include all transactions of that type. Register names and the names of fields and bits in registers and headers are presented with the first letter capitalized and the remainder in lower case.

Numbers and Number Bases
Hexadecimal numbers are written with a lower case “h” suffix, e.g., FFFh and 80h. Hexadecimal numbers larger than four digits are represented with a space dividing each group of four digits, as in 1E FFFF FFFFh. Binary numbers are written with a lower case “b” suffix, e.g., 1001b and 10b. Binary numbers larger than four digits are written with a space dividing each group of four digits, as in 1000 0101 0010b. All other numbers are decimal.


Terms and Acronyms

Base Class: The upper byte of a Class Code, which broadly classifies the type of functionality that the device Function provides.
Capability ID: An eight-bit value that identifies the type and format of a PCI-Compatible Capability structure. See the PCI Local Bus Specification.
Class Code: A three-byte field in a Function’s Configuration Space header that identifies the generic functionality of the Function, and in some cases, a specific Programming Interface. See the PCI Local Bus Specification.
Extended Capability ID: A sixteen-bit value that identifies the type and format of an Extended Capability structure. See the PCI Express Base Specification.
Programming Interface: The lower byte of a Class Code, which identifies the specific register-level interface (if any) of a device Function, so that device-independent software can interact with the device.
Sub-Class: The middle byte of a Class Code, which more specifically identifies the type of functionality that the device Function provides.
Vendor-Specific: Behavior defined by the manufacturer identified by the Vendor ID field in the PCI Capability Header (Configuration Space offset 00h).
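
Per the Configuration Space header layout in the PCI Local Bus Specification, the three Class Code bytes defined above sit at offsets 09h (Programming Interface), 0Ah (Sub-Class), and 0Bh (Base Class). The following sketch shows one way device-independent software might read and decode them; it assumes a Linux host that exposes configuration space through sysfs, and the device address used in the example is purely illustrative.

```python
# Minimal sketch: decode the Class Code of one PCI Function via Linux sysfs.
# Offsets in the Configuration Space header: 09h = Programming Interface,
# 0Ah = Sub-Class, 0Bh = Base Class. The device address below is an assumption
# made for the example only.

BASE_CLASS_NAMES = {
    0x00: "Device built before Class Code definitions were finalized",
    0x01: "Mass storage controller",
    0x02: "Network controller",
    0x03: "Display controller",
    0x06: "Bridge device",
    0x0C: "Serial bus controller",
    0xFF: "Device does not fit in any defined classes",
}

def read_class_code(bdf: str) -> tuple[int, int, int]:
    """Return (base_class, sub_class, prog_if) for a device such as '0000:00:00.0'."""
    with open(f"/sys/bus/pci/devices/{bdf}/config", "rb") as f:
        header = f.read(16)          # the first 64 bytes are readable without privileges
    return header[0x0B], header[0x0A], header[0x09]

if __name__ == "__main__":
    base, sub, prog_if = read_class_code("0000:00:00.0")
    name = BASE_CLASS_NAMES.get(base, "see Section 1 for the full table")
    print(f"Class Code {base:02X}{sub:02X}{prog_if:02X}h: {name}")
```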


1. Class Codes

This chapter describes the current Class Code encodings. This list may be enhanced at any time. The PCI-SIG web site contains the latest version of this specification. Companies wishing to define a new encoding should contact the PCI-SIG. All unspecified values are reserved for PCI-SIG assignment.

Base Class | Meaning
00h | Device was built before Class Code definitions were finalized
01h | Mass storage controller
02h | Network controller
03h | Display controller
04h | Multimedia device
05h | Memory controller
06h | Bridge device
07h | Simple communication controllers
08h | Base system peripherals
09h | Input devices
0Ah | Docking stations
0Bh | Processors
0Ch | Serial bus controllers
0Dh | Wireless controller
0Eh | Intelligent I/O controllers
0Fh | Satellite communication controllers
10h | Encryption/Decryption controllers
11h | Data acquisition and signal processing controllers
12h | Processing accelerators
13h | Non-Essential Instrumentation
14h - FEh | Reserved
FFh | Device does not fit in any defined classes

Note: The content of this chapter was originally in Appendix D of the PCI Local Bus Specification, Revision 3.0.


1.1. Base Class 00h

This base class is defined to provide backward compatibility for devices that were built before the Class Code field was defined. No new devices should use this value and existing devices should switch to a more appropriate value if possible. For class codes with this base class value, there are two defined values for the remaining fields as shown in the table below. All other values are reserved.

Base Class | Sub-Class | Programming Interface | Meaning
00h | 00h | 00h | All currently implemented devices except VGA-compatible devices
00h | 01h | 00h | VGA-compatible device

1.2. Base Class 01h

This base class is defined for all types of mass storage controllers. Several sub-class values are defined.

Base Class | Sub-Class | Programming Interface | Meaning
01h | 00h | 00h | SCSI controller - vendor-specific interface
01h | 00h | 11h | SCSI storage device (e.g., hard disk drive (HDD), solid state drive (SSD), or RAID controller) - SCSI over PCI Express (SOP) target port using PCI Express Queuing Interface (PQI) (see Notes 3 and 4)
01h | 00h | 12h | SCSI controller (i.e., host bus adapter) - SCSI over PCI Express (SOP) target port using PCI Express Queuing Interface (PQI) (see Notes 3 and 4)
01h | 00h | 13h | SCSI storage device and SCSI controller - SCSI over PCI Express (SOP) target port using PCI Express Queuing Interface (PQI) (see Notes 3 and 4)
01h | 00h | 21h | SCSI storage device - SCSI over PCI Express (SOP) target port using the queueing interface portion of the NVM Express interface (see Notes 3 and 6)
01h | 01h | xxh | IDE controller (see Note 1)
01h | 02h | 00h | Floppy disk controller - vendor-specific interface
01h | 03h | 00h | IPI bus controller - vendor-specific interface
01h | 04h | 00h | RAID controller - vendor-specific interface
01h | 05h | 20h | ATA controller with ADMA interface - single stepping (see Note 2)
01h | 05h | 30h | ATA controller with ADMA interface - continuous operation (see Note 2)
01h | 06h | 00h | Serial ATA controller - vendor-specific interface
01h | 06h | 01h | Serial ATA controller - AHCI interface (see Note 7)
01h | 06h | 02h | Serial Storage Bus Interface
01h | 07h | 00h | Serial Attached SCSI (SAS) controller - vendor-specific interface
01h | 07h | 01h | Obsolete
01h | 08h | 00h | Non-volatile memory subsystem - vendor-specific interface
01h | 08h | 01h | Non-volatile memory subsystem - NVMHCI interface (see Note 8)
01h | 08h | 02h | NVM Express (NVMe) I/O controller (see Note 6)
01h | 08h | 03h | NVM Express (NVMe) administrative controller (see Note 6)
01h | 09h | 00h | Universal Flash Storage (UFS) controller - vendor-specific interface
01h | 09h | 01h | Universal Flash Storage (UFS) controller - Universal Flash Storage Host Controller Interface (UFSHCI) (see Note 5)
01h | 80h | 00h | Other mass storage controller - vendor-specific interface

Notes:
1. Register interface conforms to the PCI Compatibility and PCI-Native Mode Bus interface defined in ANSI INCITS 370-2004: ATA Host Adapters Standard (see http://www.incits.org and http://www.t13.org).
2. Register interface conforms to the ADMA interface defined in ANSI INCITS 370-2004: ATA Host Adapters Standard (see http://www.incits.org and http://www.t13.org).
3. Conforms to the SCSI over PCI Express (SOP) standard (ISO/IEC 14776-271) (see http://www.incits.org and http://www.t10.org).
4. Conforms to the PCI Express Queuing Interface (PQI) standard (ISO/IEC 14776-171) (see http://www.incits.org and http://www.t10.org).
5. Conforms to JESD223a (see http://www.jedec.org/standards-documents/docs/jesd223a).
6. Conforms to the NVM Express Specification (see http://www.nvmexpress.org).
7. Conforms to the AHCI Specification (see http://www.intel.com).
8. Conforms to the NVMHCI Specification (see http://www.nvmexpress.org).
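
Putting the table above to use, a host tool that needs to locate every NVM Express I/O controller can match the full 24-bit Class Code 01 0802h (Base Class 01h, Sub-Class 08h, Programming Interface 02h). The sketch below does this on a Linux host through the sysfs class attribute; the host OS and that attribute path are assumptions of the example, not something this specification defines.

```python
# Sketch: list Functions whose Class Code marks them as NVMe I/O controllers.
# Assumes a Linux host where the kernel exposes each Function's Class Code
# as a hex string in /sys/bus/pci/devices/<bdf>/class.
from pathlib import Path

NVME_IO_CONTROLLER = 0x010802   # Base Class 01h / Sub-Class 08h / Programming Interface 02h

def find_by_class_code(wanted: int) -> list[str]:
    matches = []
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        class_code = int((dev / "class").read_text(), 16)
        if class_code == wanted:
            matches.append(dev.name)          # e.g. "0000:3b:00.0" (illustrative)
    return matches

if __name__ == "__main__":
    for bdf in find_by_class_code(NVME_IO_CONTROLLER):
        print(bdf, "-> NVM Express (NVMe) I/O controller")
```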


1.3. Base Class 02h

This base class is defined for all types of network controllers. Several sub-class values are defined.

Base Class | Sub-Class | Programming Interface | Meaning
02h | 00h | 00h | Ethernet controller
02h | 01h | 00h | Token Ring controller
02h | 02h | 00h | FDDI controller
02h | 03h | 00h | ATM controller
02h | 04h | 00h | ISDN controller
02h | 05h | 00h | WorldFip controller
02h | 06h | xxh | PICMG 2.14 Multi Computing (see Note 1)
02h | 07h | 00h | InfiniBand Controller
02h | 08h | 00h | Host fabric controller – vendor-specific
02h | 80h | 00h | Other network controller

Notes:
1. For information on the use of this field see the PICMG 2.14 Multi Computing Specification (http://www.picmg.com).

1.4. Base Class 03h

This base class is defined for all types of display controllers. For VGA devices (Sub-Class 00h), the Programming Interface byte is divided into a bit field that identifies additional video controller compatibilities. A device can support multiple interfaces by using the bit map to indicate which interfaces are supported. For the XGA devices (Sub-Class 01h), only the standard XGA interface is defined. Sub-Class 02h is for controllers that have hardware support for 3D operations and are not VGA compatible.

Base Class | Sub-Class | Programming Interface | Meaning
03h | 00h | 0000 0000b | VGA-compatible controller. Memory addresses 0A 0000h through 0B FFFFh. I/O addresses 3B0h to 3BBh and 3C0h to 3DFh and all aliases of these addresses.
03h | 00h | 0000 0001b | 8514-compatible controller. I/O addresses 2E8h and its aliases, 2EAh-2EFh.
03h | 01h | 00h | XGA controller
03h | 02h | 00h | 3D controller
03h | 80h | 00h | Other display controller


1.5. Base Class 04h

This base class is defined for all types of multimedia devices. Several sub-class values are defined.

Base Class | Sub-Class | Programming Interface | Meaning
04h | 00h | 00h | Video device – vendor specific interface
04h | 01h | 00h | Audio device – vendor specific interface
04h | 02h | 00h | Computer telephony device – vendor specific interface
04h | 03h | 00h | High Definition Audio (HD-A) 1.0 compatible (see Note 1)
04h | 03h | 80h | High Definition Audio (HD-A) 1.0 compatible (see Note 1) with additional vendor specific extensions
04h | 80h | 00h | Other multimedia device – vendor specific interface

Notes: 1. The High Definition Audio Specification is available here: http://www.intel.com/content/www/us/en/standards/standards-high-def-audio-specs-general-technology.html

1.6. Base Class 05h

This base class is defined for all types of memory controllers. Several sub-class values are defined. There are no Programming Interfaces defined.

Base Class | Sub-Class | Programming Interface | Meaning
05h | 00h | 00h | RAM
05h | 01h | 00h | Flash
05h | 80h | 00h | Other memory controller


1.7. Base Class 06h

This base class is defined for all types of bridge devices. A PCI bridge is any PCI device that maps PCI resources (memory or I/O) from one side of the device to the other. Several sub-class values are defined.

Base Class | Sub-Class | Programming Interface | Meaning
06h | 00h | 00h | Host bridge
06h | 01h | 00h | ISA bridge
06h | 02h | 00h | EISA bridge
06h | 03h | 00h | MCA bridge
06h | 04h | 00h | PCI-to-PCI bridge
06h | 04h | 01h | Subtractive Decode PCI-to-PCI bridge. This interface code identifies the PCI-to-PCI bridge as a device that supports subtractive decoding in addition to all the currently defined functions of a PCI-to-PCI bridge.
06h | 05h | 00h | PCMCIA bridge
06h | 06h | 00h | NuBus bridge
06h | 07h | 00h | CardBus bridge
06h | 08h | xxh | RACEway bridge (see Note 1)
06h | 09h | 40h | Semi-transparent PCI-to-PCI bridge with the primary PCI bus side facing the system host processor
06h | 09h | 80h | Semi-transparent PCI-to-PCI bridge with the secondary PCI bus side facing the system host processor
06h | 0Ah | 00h | InfiniBand-to-PCI host bridge
06h | 0Bh | 00h | Advanced Switching to PCI host bridge – Custom Interface
06h | 0Bh | 01h | Advanced Switching to PCI host bridge – ASI-SIG Defined Portal Interface
06h | 80h | 00h | Other bridge device

Notes:
1. RACEway is an ANSI standard (ANSI/VITA 5-1994) switching fabric. For the Programming Interface bits, [7:1] are reserved, read-only, and return zeros. Bit 0 defines the operation mode and is read-only: 0 - Transparent mode; 1 - End-point mode.
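
Bridges with Sub-Class 04h use the Type 1 Configuration Space header defined in the PCI-to-PCI Bridge Architecture Specification, where the Primary, Secondary, and Subordinate Bus Number registers at offsets 18h, 19h, and 1Ah record the bus number assignment made during enumeration. The sketch below reads those registers for every PCI-to-PCI bridge it can see; the use of Linux sysfs is an assumption of the example rather than anything this document specifies.

```python
# Sketch: print the bus number assignment recorded in every PCI-to-PCI bridge
# (Base Class 06h, Sub-Class 04h). Type 1 header offsets: 18h primary bus,
# 19h secondary bus, 1Ah subordinate bus. Assumes Linux sysfs; the first
# 64 bytes of config space are readable without extra privileges.
from pathlib import Path

def pci_bridges():
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        class_code = int((dev / "class").read_text(), 16)
        if (class_code >> 8) == 0x0604:        # ignore the Programming Interface byte
            yield dev

for dev in pci_bridges():
    cfg = (dev / "config").read_bytes()[:0x20]
    primary, secondary, subordinate = cfg[0x18], cfg[0x19], cfg[0x1A]
    print(f"{dev.name}: primary {primary:02x}, secondary {secondary:02x}, "
          f"subordinate {subordinate:02x}")
```

On hosts where the pciutils package is installed, the same numbering can be cross-checked against the tree printed by lspci -tv.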


1.8. Base Class 07h

This base class is defined for all types of simple communications controllers. Several sub-class values are defined, some of these having specific well-known Programming Interfaces.

Base Class | Sub-Class | Programming Interface | Meaning
07h | 00h | 00h | Generic XT-compatible serial controller
07h | 00h | 01h | 16450-compatible serial controller
07h | 00h | 02h | 16550-compatible serial controller
07h | 00h | 03h | 16650-compatible serial controller
07h | 00h | 04h | 16750-compatible serial controller
07h | 00h | 05h | 16850-compatible serial controller
07h | 00h | 06h | 16950-compatible serial controller
07h | 01h | 00h | Parallel port
07h | 01h | 01h | Bi-directional parallel port
07h | 01h | 02h | ECP 1.X compliant parallel port
07h | 01h | 03h | IEEE1284 controller
07h | 01h | FEh | IEEE1284 target device (not a controller)
07h | 02h | 00h | Multiport serial controller
07h | 03h | 00h | Generic modem
07h | 03h | 01h | Hayes compatible modem, 16450-compatible interface (see Note 1)
07h | 03h | 02h | Hayes compatible modem, 16550-compatible interface (see Note 1)
07h | 03h | 03h | Hayes compatible modem, 16650-compatible interface (see Note 1)
07h | 03h | 04h | Hayes compatible modem, 16750-compatible interface (see Note 1)
07h | 04h | 00h | GPIB (IEEE 488.1/2) controller
07h | 05h | 00h | Smart Card
07h | 80h | 00h | Other communications device

Notes:
1. For Hayes-compatible modems, the first Base Address register (at offset 10h) maps the appropriate compatible (i.e., 16450, 16550, etc.) register set for the serial controller at the beginning of the mapped space. Note that these registers can be either memory or I/O mapped depending on what kind of BAR is used.


1.9. Base Class 08h

This base class is defined for all types of generic system peripherals. Several sub-class values are defined, most of these having a specific well-known Programming Interface.

Base Class | Sub-Class | Programming Interface | Meaning
08h | 00h | 00h | Generic 8259 PIC
08h | 00h | 01h | ISA PIC
08h | 00h | 02h | EISA PIC
08h | 00h | 10h | I/O APIC interrupt controller (see Note 1)
08h | 00h | 20h | I/O(x) APIC interrupt controller
08h | 01h | 00h | Generic 8237 DMA controller
08h | 01h | 01h | ISA DMA controller
08h | 01h | 02h | EISA DMA controller
08h | 02h | 00h | Generic 8254 system timer
08h | 02h | 01h | ISA system timer
08h | 02h | 02h | EISA system timers (two timers)
08h | 02h | 03h | High Performance Event Timer
08h | 03h | 00h | Generic RTC controller
08h | 03h | 01h | ISA RTC controller
08h | 04h | 00h | Generic PCI Hot-Plug controller
08h | 05h | 00h | SD Host controller
08h | 06h | 00h | IOMMU
08h | 07h | 00h | Root Complex Event Collector (see Note 2)
08h | 80h | 00h | Other system peripheral

Notes:
1. For the I/O APIC Interrupt Controller, the Base Address register at offset 10h is used to request a minimum of 32 bytes of non-prefetchable memory. Two registers within that space are located at Base+00h (I/O Select register) and Base+10h (I/O Window register).
2. Some versions of the PCI Express Base Specification defined Root Complex Event Collectors to use Sub-Class 06h. Implementations are permitted to use Sub-Class 06h for this purpose, but this practice is strongly discouraged. The Device/Port Type field value can be used to accurately identify all Root Complex Event Collectors.


1.10. Base Class 09h

This base class is defined for all types of input devices. Several sub-class values are defined. A Programming Interface is defined for gameport controllers.

Base Class | Sub-Class | Programming Interface | Meaning
09h | 00h | 00h | Keyboard controller
09h | 01h | 00h | Digitizer (pen)
09h | 02h | 00h | Mouse controller
09h | 03h | 00h | Scanner controller
09h | 04h | 00h | Gameport controller (generic)
09h | 04h | 10h | Gameport controller (see Note 1)
09h | 80h | 00h | Other input controller

Notes:
1. A gameport controller with a Programming Interface of 10h indicates that, for any Base Address registers in this Function that request/assign I/O address space, the registers in that I/O space conform to the standard “legacy” game ports. The byte at offset 00h in an I/O region behaves as a legacy gameport interface where reads to the byte return joystick/gamepad information, and writes to the byte start the RC timer. The byte at offset 01h is an alias of the byte at offset 00h. All other bytes in an I/O region are unspecified and can be used in vendor unique ways.

1.11. Base Class 0Ah

This base class is defined for all types of docking stations. No specific Programming Interfaces are defined.

Base Class | Sub-Class | Programming Interface | Meaning
0Ah | 00h | 00h | Generic docking station
0Ah | 80h | 00h | Other type of docking station


1.12. Base Class 0Bh

This base class is defined for all types of processors. Several sub-class values are defined corresponding to different processor types or instruction sets. There are no specific Programming Interfaces defined.

Base Class | Sub-Class | Programming Interface | Meaning
0Bh | 00h | 00h | 386
0Bh | 01h | 00h | 486
0Bh | 02h | 00h | Pentium
0Bh | 10h | 00h | Alpha
0Bh | 20h | 00h | PowerPC
0Bh | 30h | 00h | MIPS
0Bh | 40h | 00h | Co-processor
0Bh | 80h | 00h | Other processors


1.13. Base Class 0Ch

This base class is defined for all types of serial bus controllers. Several sub-class values are defined.

Base Class | Sub-Class | Programming Interface | Meaning
0Ch | 00h | 00h | IEEE 1394 (FireWire)
0Ch | 00h | 10h | IEEE 1394 following the 1394 OpenHCI specification
0Ch | 01h | 00h | ACCESS.bus
0Ch | 02h | 00h | SSA
0Ch | 03h | 00h | Universal Serial Bus (USB) following the Universal Host Controller Specification
0Ch | 03h | 10h | Universal Serial Bus (USB) following the Open Host Controller Specification
0Ch | 03h | 20h | USB 2 host controller following the Intel Enhanced Host Controller Interface Specification
0Ch | 03h | 30h | Universal Serial Bus (USB) Host Controller following the Intel eXtensible Host Controller Interface (xHCI) Specification
0Ch | 03h | 80h | Universal Serial Bus with no specific Programming Interface
0Ch | 03h | FEh | USB device (not host controller)
0Ch | 04h | 00h | Fibre Channel
0Ch | 05h | 00h | SMBus (System Management Bus)
0Ch | 06h | 00h | InfiniBand. This sub-class is deprecated; new InfiniBand adapters should use the base class and sub-class defined in Section 1.3.
0Ch | 07h | 00h | IPMI SMIC Interface (see Note 1)
0Ch | 07h | 01h | IPMI Keyboard Controller Style Interface (see Note 1)
0Ch | 07h | 02h | IPMI Block Transfer Interface (see Note 1)
0Ch | 08h | 00h | SERCOS Interface Standard (IEC 61491) (see Note 2)
0Ch | 09h | 00h | CANbus
0Ch | 0Ah | 00h | MIPI I3C Host Controller Interface (see Note 3)
0Ch | 80h | 00h | Other Serial Bus Controllers

Notes:
1. The register interface definitions for the Intelligent Platform Management Interface (Sub-Class 07h) are in the IPMI specification.
2. There is no register level definition for the SERCOS Interface standard. For more information see IEC 61491.
3. The MIPI I3C Host Controller Interface specification is (upon publication) available at http://software.mipi.org.


1.14. Base Class 0Dh

This base class is defined for all types of wireless controllers. Several sub-class values are defined.

Base Class | Sub-Class | Programming Interface | Meaning
0Dh | 00h | 00h | iRDA compatible controller
0Dh | 01h | 00h | Consumer IR controller
0Dh | 01h | 10h | UWB Radio controller
0Dh | 10h | 00h | RF controller
0Dh | 11h | 00h | Bluetooth
0Dh | 12h | 00h | Broadband
0Dh | 20h | 00h | Ethernet (802.11a – 5 GHz)
0Dh | 21h | 00h | Ethernet (802.11b – 2.4 GHz)
0Dh | 40h | 00h | Cellular controller/modem
0Dh | 41h | 00h | Cellular controller/modem plus Ethernet (802.11)
0Dh | 80h | 00h | Other type of wireless controller

1.15. Base Class 0Eh

This base class is defined for intelligent I/O controllers. The primary characteristic of this base class is that the I/O function provided follows some sort of generic definition for an I/O controller.

Base Class | Sub-Class | Programming Interface | Meaning
0Eh | 00h | xxh | Intelligent I/O (I2O) Architecture Specification 1.0
0Eh | 00h | 00h | Message FIFO at offset 040h

1.16. Base Class 0Fh

This base class is defined for satellite communication controllers. Several sub-class values are defined. There are no Programming Interfaces defined.

Base Class | Sub-Class | Programming Interface | Meaning
0Fh | 01h | 00h | TV
0Fh | 02h | 00h | Audio
0Fh | 03h | 00h | Voice
0Fh | 04h | 00h | Data
0Fh | 80h | 00h | Other satellite communication controller


1.17. Base Class 10h

This base class is defined for all types of encryption and decryption controllers. Several sub-class values are defined. There are no Programming Interfaces defined.

Base Class | Sub-Class | Programming Interface | Meaning
10h | 00h | 00h | Network and computing encryption and decryption controller
10h | 10h | 00h | Entertainment encryption and decryption controller
10h | 80h | 00h | Other encryption and decryption controller

1.18. Base Class 11h

This base class is defined for all types of data acquisition and signal processing controllers. Several sub-class values are defined. There are no Programming Interfaces defined.

Base Class | Sub-Class | Programming Interface | Meaning
11h | 00h | 00h | DPIO modules
11h | 01h | 00h | Performance counters
11h | 10h | 00h | Communications synchronization plus time and frequency test/measurement
11h | 20h | 00h | Management card
11h | 80h | 00h | Other data acquisition/signal processing controllers

1.19. Base Class 12h

This base class is defined for processing accelerators. No sub-classes or Programming Interfaces are defined.

Base Class | Sub-Class | Programming Interface | Meaning
12h | 00h | 00h | Processing Accelerator – vendor-specific interface


1.20. Base Class 13h

This base class is defined for Functions that provide component/platform instrumentation capabilities not essential to normal run-time operation. Examples include instrumentation or debug capabilities used in development, or by authorized users. It is intended that a system might implement differentiated policies for Functions with this base class, for example, a policy of silently ignoring cases where no device driver matches the Function (vs. the typical default of notifying the user).

Base Class | Sub-Class | Programming Interface | Meaning
13h | 00h | 00h | Non-Essential Instrumentation Function – Vendor-specific interface


2. Capability IDs

This chapter describes the current PCI-Compatible Capability IDs. Each Capability structure must have a Capability ID assigned by the PCI-SIG. Companies wishing to define a new encoding should contact the PCI-SIG. All unspecified values are reserved for PCI-SIG assignment.

Table 2-1: Capability IDs

ID | Capability
00h | Null Capability – This capability contains no registers other than those described below. It may be present in any Function. Functions may contain multiple instances of this capability. The Null Capability is 16 bits and contains an 8-bit Capability ID followed by an 8-bit Next Capability Pointer.
01h | PCI Power Management Interface – This Capability structure provides a standard interface to control power management features in a device Function. It is fully documented in the PCI Bus Power Management Interface Specification.
02h | AGP – This Capability structure identifies a controller that is capable of using Accelerated Graphics Port features. Full documentation can be found in the Accelerated Graphics Port Interface Specification.
03h | VPD – This Capability structure identifies a device Function that supports Vital Product Data. Full documentation of this feature can be found in the PCI Local Bus Specification.
04h | Slot Identification – This Capability structure identifies a bridge that provides external expansion capabilities. Full documentation of this feature can be found in the PCI-to-PCI Bridge Architecture Specification.
05h | Message Signaled Interrupts – This Capability structure identifies a device Function that can do message signaled interrupt delivery. Full documentation of this feature can be found in the PCI Local Bus Specification.
06h | CompactPCI Hot Swap – This Capability structure provides a standard interface to control and sense status within a device that supports Hot Swap insertion and extraction in a CompactPCI system. This Capability is documented in the CompactPCI Hot Swap Specification PICMG 2.1, R1.0 available at http://www.picmg.org.
07h | PCI-X – Refer to the PCI-X Protocol Addendum to the PCI Local Bus Specification for details.

Note: The content of this chapter was originally in Appendix H of the PCI Local Bus Specification, Revision 3.0.


ID | Capability
08h | HyperTransport – This Capability structure provides control and status for devices that implement HyperTransport Technology links. For details, refer to the HyperTransport I/O Link Specification available at http://www.hypertransport.org.
09h | Vendor Specific – This Capability structure allows device vendors to use the Capability mechanism to expose vendor-specific registers. The byte immediately following the Next Pointer in the Capability structure is defined to be a length field. This length field provides the number of bytes in the Capability structure (including the Capability ID and Next Pointer bytes). All remaining bytes in the Capability structure are vendor-specific.
0Ah | Debug port
0Bh | CompactPCI central resource control – Definition of this Capability can be found in the PICMG 2.13 Specification (http://www.picmg.com).
0Ch | PCI Hot-Plug – This Capability ID indicates that the associated device conforms to the Standard Hot-Plug Controller model.
0Dh | PCI Bridge Subsystem Vendor ID
0Eh | AGP 8x
0Fh | Secure Device
10h | PCI Express
11h | MSI-X – This Capability ID identifies an optional extension to the basic MSI functionality.
12h | Serial ATA Data/Index Configuration
13h | Advanced Features (AF) – Full documentation of this feature can be found in the Advanced Capabilities for Conventional PCI ECN.
14h | Enhanced Allocation
15h | Flattening Portal Bridge
Others | Reserved
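
Software discovers which of the Capability IDs in Table 2-1 a Function implements by walking its capability list: when bit 4 of the Status register (offset 06h) is set, the Capabilities Pointer at offset 34h gives the location of the first structure, and each structure begins with the 8-bit Capability ID followed by the 8-bit Next Capability Pointer, as described for the Null Capability above. A minimal sketch of that walk, assuming a Linux host with sysfs access to configuration space, is shown below; without root privileges only the first 64 bytes are usually readable, so the walk simply stops if the dump is truncated.

```python
# Sketch: walk a Function's PCI-compatible capability list and name what it finds.
# Status register (offset 06h) bit 4 = Capabilities List; Capabilities Pointer at 34h;
# each structure = 8-bit Capability ID + 8-bit Next Capability Pointer.
# Assumes Linux sysfs; reading past the first 64 bytes generally needs root.
import struct

CAP_NAMES = {0x01: "PCI Power Management Interface", 0x05: "Message Signaled Interrupts",
             0x10: "PCI Express", 0x11: "MSI-X",
             0x12: "Serial ATA Data/Index Configuration",
             0x13: "Advanced Features (AF)", 0x14: "Enhanced Allocation"}

def walk_capabilities(bdf: str):
    with open(f"/sys/bus/pci/devices/{bdf}/config", "rb") as f:
        cfg = f.read()
    status = struct.unpack_from("<H", cfg, 0x06)[0]
    if not (status & 0x10):                      # no capability list implemented
        return
    ptr = cfg[0x34] & 0xFC                       # the low two bits are reserved
    seen = set()
    while ptr and ptr not in seen and ptr + 1 < len(cfg):
        seen.add(ptr)                            # guard against malformed loops
        cap_id, nxt = cfg[ptr], cfg[ptr + 1]
        yield ptr, cap_id, CAP_NAMES.get(cap_id, "see Table 2-1")
        ptr = nxt & 0xFC

if __name__ == "__main__":
    # The device address is only an example.
    for offset, cap_id, name in walk_capabilities("0000:00:00.0"):
        print(f"offset {offset:02x}h: Capability ID {cap_id:02X}h ({name})")
```

For comparison, lspci -v prints the same structures under its "Capabilities:" lines on hosts where pciutils is available.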


3. Extended Capability IDs

This chapter describes the current Extended Capability IDs. Each Extended Capability structure must have an Extended Capability ID assigned by the PCI-SIG. Unless otherwise noted, each Extended Capability ID is defined in the PCI Express Base Specification. Companies wishing to define a new encoding should contact the PCI-SIG. All unspecified values are reserved for PCI-SIG assignment.

Table 3-1: Extended Capability IDs

ID | Extended Capability
0000h | Null Capability – This capability contains no registers other than those in the Extended Capability Header. It may be present in any Function. Functions may contain multiple instances of this capability. The Null Extended Capability is 32 bits and contains only an Extended Capability Header. The Capability Version field of a Null Extended Capability is not meaningful and may contain any value.
0001h | Advanced Error Reporting (AER)
0002h | Virtual Channel (VC) – used if an MFVC Extended Cap structure is not present in the device
0003h | Device Serial Number
0004h | Power Budgeting
0005h | Root Complex Link Declaration
0006h | Root Complex Internal Link Control
0007h | Root Complex Event Collector Endpoint Association
0008h | Multi-Function Virtual Channel (MFVC)
0009h | Virtual Channel (VC) – used if an MFVC Extended Cap structure is present in the device
000Ah | Root Complex Register Block (RCRB) Header
000Bh | Vendor-Specific Extended Capability (VSEC)
000Ch | Configuration Access Correlation (CAC) – defined by the Trusted Configuration Space (TCS) for PCI Express ECN, which is no longer supported
000Dh | Access Control Services (ACS)
000Eh | Alternative Routing-ID Interpretation (ARI)
000Fh | Address Translation Services (ATS)
0010h | Single Root I/O Virtualization (SR-IOV)


ID | Extended Capability
0011h | Multi-Root I/O Virtualization (MR-IOV) – defined in the Multi-Root I/O Virtualization and Sharing Specification
0012h | Multicast
0013h | Page Request Interface (PRI)
0014h | Reserved for AMD
0015h | Resizable BAR
0016h | Dynamic Power Allocation (DPA)
0017h | TPH Requester
0018h | Latency Tolerance Reporting (LTR)
0019h | Secondary PCI Express
001Ah | Protocol Multiplexing (PMUX)
001Bh | Process Address Space ID (PASID)
001Ch | LN Requester (LNR)
001Dh | Downstream Port Containment (DPC)
001Eh | L1 PM Substates
001Fh | Precision Time Measurement (PTM)
0020h | PCI Express over M-PHY (M-PCIe)
0021h | FRS Queueing
0022h | Readiness Time Reporting
0023h | Designated Vendor-Specific Extended Capability
0024h | VF Resizable BAR
0025h | Data Link Feature
0026h | Physical Layer 16.0 GT/s
0027h | Lane Margining at the Receiver
0028h | Hierarchy ID
0029h | Native PCIe Enclosure Management (NPEM)
002Ah | Physical Layer 32.0 GT/s
002Bh | Alternate Protocol
002Ch | System Firmware Intermediary (SFI)
Others | Reserved
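
Extended Capability structures are located in the Extended Configuration Space starting at offset 100h. Each begins with a 32-bit header whose bits 15:0 hold the Extended Capability ID from Table 3-1, bits 19:16 the Capability Version, and bits 31:20 the offset of the next structure (a value of 0 ends the list). The sketch below walks that list; it assumes a Linux host and enough privilege to read the full 4 KB of configuration space through sysfs.

```python
# Sketch: enumerate PCI Express Extended Capabilities.
# Each header dword at or above offset 100h: bits [15:0] Extended Capability ID,
# [19:16] Capability Version, [31:20] next capability offset (0 = end of list).
# Assumes Linux sysfs and enough privilege to read the full 4 KB config space.
import struct

EXT_CAP_NAMES = {0x0001: "Advanced Error Reporting (AER)", 0x0003: "Device Serial Number",
                 0x000E: "Alternative Routing-ID Interpretation (ARI)",
                 0x0010: "Single Root I/O Virtualization (SR-IOV)",
                 0x001E: "L1 PM Substates", 0x002C: "System Firmware Intermediary (SFI)"}

def walk_extended_capabilities(bdf: str):
    with open(f"/sys/bus/pci/devices/{bdf}/config", "rb") as f:
        cfg = f.read()
    if len(cfg) <= 0x100:
        return                                   # only the legacy config space was readable
    offset, seen = 0x100, set()
    while offset and offset not in seen and offset + 4 <= len(cfg):
        seen.add(offset)                         # guard against malformed loops
        header = struct.unpack_from("<I", cfg, offset)[0]
        if header == 0 or header == 0xFFFFFFFF:  # no (further) Extended Capabilities
            return
        cap_id = header & 0xFFFF
        version = (header >> 16) & 0xF
        yield offset, cap_id, version, EXT_CAP_NAMES.get(cap_id, "see Table 3-1")
        offset = (header >> 20) & 0xFFC          # next offset is dword aligned

if __name__ == "__main__":
    # The device address is only an example.
    for off, cap_id, ver, name in walk_extended_capabilities("0000:00:00.0"):
        print(f"offset {off:03x}h: Extended Capability ID {cap_id:04X}h v{ver} ({name})")
```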

  • File_Allocation_Table
  • SATA_Express
  • System_Management_Bus
  • Apple_Desktop_Bus
  • Creator_code
  • Direct_Media_Interface
  • Tomsk Tourism
  • Tomsk Hotels
  • Tomsk Bed and Breakfast
  • Flights to Tomsk
  • Tomsk Restaurants
  • Things to Do in Tomsk
  • Tomsk Travel Forum
  • Tomsk Photos
  • All Tomsk Hotels
  • Tomsk Hotel Deals

Buses between Tayga and Tomsk - Tomsk Forum

  • Europe    
  • Russia    
  • Siberian District    
  • Tomsk Oblast    
  • Tomsk    

Buses between Tayga and Tomsk

  • United States Forums
  • Europe Forums
  • Canada Forums
  • Asia Forums
  • Central America Forums
  • Africa Forums
  • Caribbean Forums
  • Mexico Forums
  • South Pacific Forums
  • South America Forums
  • Middle East Forums
  • Honeymoons and Romance
  • Business Travel
  • Train Travel
  • Traveling With Disabilities
  • Tripadvisor Support
  • Solo Travel
  • Bargain Travel
  • Timeshares / Vacation Rentals
  • Tomsk Oblast forums
  • Tomsk forum

pci bus number assignment

Hi everyone,

it's easy enough to find details of trains between Tayga and Tomsk, but as they are very few and far between, does anyone have a link to a bus schedule?

Or has anyone had experience of taking shared taxis or similar?

Many thanks in advance.

5 replies to this topic

Automobile road takes a long detour via Kemerovo so have a closer look at train schedule http://goo.gl/dAjs5j

Avidclam, thank you very much, that's brilliant information.

Much appreciated!

Tripadvisor staff removed this post because it did not meet Tripadvisor's forum posting guidelines with prohibiting self-promotional advertising or solicitation.

No buses between taiga and Tomsk at all. only trains about 2-3 hours period

Many thanks for your help.

COMMENTS

  1. PDF PCI Code and ID Assignment Specification

    The PCI-SIG web site contains the latest version of this specification. Companies wishing to define a new encoding should contact the PCI-SIG. All unspecified values are reserved for PCI-SIG assignment. The content of this chapter was originally in Appendix D of the PCI Local Bus Specification, Revision 3.0.

  2. Understand the Primary/Secondary/Subordinate Bus number in PCI/PCIe

    The secondary bus number is the bus directly downstream of the port; buses from the secondary bus number plus one through the subordinate bus number exist somewhere below that port. PCIe TLP routing logic sends any packet whose bus number falls into that range out of the port.
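
    This routing rule is easy to express in code. The sketch below assumes a hypothetical struct that mirrors the Primary/Secondary/Subordinate Bus Number registers of a bridge's Type 1 header; it is illustrative, not any particular implementation.

        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical mirror of a bridge's Type 1 header bus number registers. */
        struct bridge {
            uint8_t primary_bus;      /* bus on the bridge's upstream side       */
            uint8_t secondary_bus;    /* bus directly downstream of the bridge   */
            uint8_t subordinate_bus;  /* highest bus number reachable downstream */
        };

        /* An ID-routed TLP (configuration request, completion, ...) is forwarded
         * downstream when its target bus falls in [secondary, subordinate]. */
        static bool routes_downstream(const struct bridge *br, uint8_t target_bus)
        {
            return target_bus >= br->secondary_bus &&
                   target_bus <= br->subordinate_bus;
        }

        /* If target_bus == secondary_bus the request is delivered on that bus
         * (Type 0); otherwise it stays Type 1 for a bridge further down to claim. */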

  3. Configuring PCI-PCI Bridges

    PCI-PCI Bridge Numbering: Step 2. Linux uses a depthwise algorithm and so the initialisation code goes on to scan PCI Bus 1. Here it finds PCI-PCI Bridge 2. There are no further PCI-PCI bridges beyond PCI-PCI Bridge 2, so it is assigned a subordinate bus number of 2, which matches the number assigned to its secondary interface.

  4. Chapter 6 PCI

    It is the last bridge on this branch and so it is assigned a subordinate bus interface number of 4. The initialisation code returns to PCI-PCI Bridge 3 and assigns it a subordinate bus number of 4. Finally, the PCI initialisation code can assign 4 as the subordinate bus number for PCI-PCI Bridge 1. Figure 6.9 shows the final bus ...
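
    The two excerpts above describe the same depth-first numbering pass. Below is a compressed sketch of its shape, with hypothetical helpers (device_present, is_bridge, set_bus_numbers) standing in for real configuration-space accesses; it is not Linux's actual code.

        #include <stdbool.h>
        #include <stdint.h>

        extern bool device_present(uint8_t bus, uint8_t dev);
        extern bool is_bridge(uint8_t bus, uint8_t dev);
        extern void set_bus_numbers(uint8_t bus, uint8_t dev,
                                    uint8_t primary, uint8_t secondary,
                                    uint8_t subordinate);

        static uint8_t next_bus;                 /* next unassigned bus number */

        static uint8_t scan_bus(uint8_t bus)
        {
            uint8_t subordinate = bus;           /* highest bus found so far   */

            for (uint8_t dev = 0; dev < 32; dev++) {
                if (!device_present(bus, dev) || !is_bridge(bus, dev))
                    continue;

                uint8_t secondary = ++next_bus;

                /* Claim the widest possible range for now so config cycles
                 * can reach buses behind this bridge while it is scanned. */
                set_bus_numbers(bus, dev, bus, secondary, 0xFF);

                /* Depth-first recursion: number everything behind this bridge. */
                subordinate = scan_bus(secondary);

                /* Shrink the subordinate bus number to what was actually found. */
                set_bus_numbers(bus, dev, bus, secondary, subordinate);
            }
            return subordinate;
        }

    Calling next_bus = 0; scan_bus(0); numbers the whole tree; in the example above this leaves PCI-PCI Bridge 1 with subordinate bus number 4.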

  5. PCI device address actually means slot address? And when does PCIe slot

    This is rather different between PCI and PCI express. However, the bus and device numbers are not explicitly configured. For PCI, the device number is determined by which AD line is tied to the IDSEL input. Hence the device number is determined by the physical slot in which the card is installed, determined by the board layout.
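
    For conventional PCI there is a common device-number-to-IDSEL decode: the PCI-to-PCI Bridge Architecture Specification's Type 1 to Type 0 conversion rules map device 0 to AD16 up through device 15 to AD31, with no IDSEL asserted for devices 16-31. Host bridges and board designers are free to wire IDSEL differently, so the sketch below shows the common convention, not a guarantee.

        /* Common device-number-to-IDSEL decode (Type 1 -> Type 0 conversion in
         * the PCI-to-PCI Bridge Architecture Specification): device 0 -> AD16,
         * device 1 -> AD17, ..., device 15 -> AD31. Devices 16-31 get no IDSEL,
         * so their config reads typically return all 1s (master abort). */
        static int idsel_ad_line(unsigned int device)
        {
            if (device > 15)
                return -1;              /* no AD line asserted */
            return 16 + (int)device;    /* AD[16 + device]     */
        }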

  6. PDF PCI Code and ID Assignment Specification Revision 1.11

    Contact the PCI-SIG office to obtain the latest revision of this specification. Questions regarding the PCI Code and ID Assignment Specification or membership in PCI-SIG may be forwarded to: Membership Services www.pcisig.com E-mail: [email protected] Phone: 503-619-0569 Fax: 503-644-6708. Technical Support [email protected].

  7. PCI Code and ID Assignment Specification Revision 1.12 9 Jan 2020

    The content of this chapter was originally in Appendix D of the PCI Local Bus Specification, Revision 3.0. 1.1. Base Class 00h. This base class is defined to provide backward compatibility for devices that were built before the Class Code field was defined.

  8. How is the device determined in PCI enumeration? (bus/device/function)

    The answer is a bit confusing: 1) in PCI "device number" actually means "slot number" (and it makes sense), 2) you say "PCIe is totally different" and "since each device has its own independent set of wires, the device IDs are essentially all hard-coded", which means the set of wires (= the slot) has the ID hard-coded to it, thus it is the same as in PCI.

  9. PDF PCI hotplug: movable BARs and bus numbers

    Sergei Miroshnichenko <[email protected]> PCI hotplug: movable BARs and bus numbers. Assigning windows example 4/5. A window set with the leftmost fixed BAR is assigned first, starting from the downstream root port; Next are windows below, leading to fixed BARs; Then movable and new BARs; The second set must start at the beginning of its first ...

  10. PDF PCI Bus Demystified

    and a large number of parallel wires. The computer bus also solved a marketing problem. After all, there's no point in mass producing computers unless you can sell them. A single company possesses limited expertise and resources ... (Chapter 1: Introducing the Peripheral Component Interconnect (PCI) Bus)

  11. In-class: PCI Enumeration

    In-class: PCI Enumeration. This assignment explores the 32-bit PCI bus, creating a utility in xv6 to list PCI devices and their parameters. ... For example, since the root PCI bus is always 0, if there is a bus 1 there must be a PCI bridge on bus 0 with secondary bus number 1. You can recursively enumerate devices on the PCI bus by scanning bus ...
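
    A utility like the one described above typically reads configuration space on x86 through legacy configuration mechanism #1: write an address dword to I/O port 0xCF8 (CONFIG_ADDRESS), then read the data from port 0xCFC (CONFIG_DATA). The sketch below assumes outl()/inl() port I/O helpers, as xv6 or inline assembly would provide; it is a minimal illustration, not xv6's actual code.

        #include <stdint.h>

        /* Assumed port I/O helpers (provided by the kernel or inline asm). */
        extern void outl(uint16_t port, uint32_t value);
        extern uint32_t inl(uint16_t port);

        #define PCI_CONFIG_ADDRESS 0xCF8
        #define PCI_CONFIG_DATA    0xCFC

        static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev,
                                       uint8_t func, uint8_t offset)
        {
            uint32_t addr = (1u << 31)              /* enable bit             */
                          | ((uint32_t)bus  << 16)
                          | ((uint32_t)dev  << 11)
                          | ((uint32_t)func << 8)
                          | (offset & 0xFC);        /* dword-aligned register */

            outl(PCI_CONFIG_ADDRESS, addr);
            return inl(PCI_CONFIG_DATA);
        }

        /* A device/function exists if the Vendor ID (low 16 bits of dword 0)
         * is not 0xFFFF. */
        static int pci_function_present(uint8_t bus, uint8_t dev, uint8_t func)
        {
            return (pci_cfg_read32(bus, dev, func, 0x00) & 0xFFFF) != 0xFFFF;
        }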

  12. PCI configuration space

    This automatic device discovery and address space assignment is how plug and play is implemented. If a PCI-to-PCI bridge is found, the system must assign the secondary PCI bus beyond the bridge a bus number other than zero, and then enumerate the devices on that secondary bus. If more PCI bridges are found, the discovery continues recursively ...

  13. PCI Code and ID Assignment Specifications

    This specification contains the Class Code and Capability ID descriptions originally contained in the PCI Local Bus Specification, bringing them into a standalone document that is easier to reference and maintain. ... This specification also consolidates Extended Capability ID assignments from the PCI Express Base Specification and various other ...

  14. Peripheral Component Interconnect (PCI)

    Peripheral Component Interconnect (PCI), also known as Conventional PCI, is a computer bus used to connect hardware devices in a computer system. PCI is designed as a parallel bus with a single bus clock that allocates the time quantum. PCI was introduced by Intel in 1992. The standard width of a PCI bus is either 32 ...

  15. 12. PCI Drivers

    Each PCI peripheral is identified by a bus number, a device number, and a function number. The PCI specification permits a single system to host up to 256 buses, but because 256 buses are not sufficient for many large systems, Linux now supports PCI domains. Each PCI domain can host up to 256 buses.
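
    Those limits (256 buses per domain, 32 devices per bus, 8 functions per device) are exactly what PCIe's memory-mapped configuration mechanism (ECAM) encodes into an address: each function gets a 4 KiB window, so one domain spans 256 MiB. The helper below only computes the offset; the ECAM base address itself (typically taken from ACPI's MCFG table or a device tree) is assumed, not shown.

        #include <stdint.h>

        /* ECAM offset inside a PCI domain's configuration window:
         * bus << 20 | device << 15 | function << 12 | register.
         * 256 buses x 32 devices x 8 functions x 4 KiB = 256 MiB per domain. */
        static inline uint64_t ecam_offset(uint8_t bus, uint8_t dev,
                                           uint8_t func, uint16_t reg)
        {
            return ((uint64_t)bus           << 20) |
                   ((uint64_t)(dev  & 0x1F) << 15) |
                   ((uint64_t)(func & 0x07) << 12) |
                   (uint64_t)(reg & 0xFFF);
        }

        /* Example: bus 1, device 0, function 0, register 0 lies 0x100000
         * bytes into the domain's ECAM region. */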

  16. cpu

    PCI. The Conventional PCI bus (henceforward PCI) is designed around the bus topology: a shared bus is used to connect all the devices. To create more complex hierarchies, some devices can operate as a bridge: a bridge connects a PCI bus to another, secondary, bus. The secondary bus can be another PCI bus (the device is called a PCI-to-PCI bridge, henceforward P2P) or a bus of a different type ...

  17. Bus:Device.Function (BDF) Notation

    PCI Bus number in hexadecimal, often padded using leading zeros to two or four digits; a colon (:) ... Explicit assignment of virtual functions. Examples that explicitly assign virtual functions: Multi-Function Notation Physical >0000:00:1d.2=0-0=2 >0000:00:1d.2 0000:00:1d.1
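
    The notation itself is easy to parse mechanically. The sketch below accepts either the domain-prefixed form ("0000:00:1d.2") or the short form ("00:1d.2"); it is an illustration, not the parser used by any particular tool.

        #include <stdint.h>
        #include <stdio.h>

        /* Parsed domain:bus:device.function address; all fields hexadecimal. */
        struct bdf {
            uint16_t domain;
            uint8_t  bus, dev, func;
        };

        /* Returns 0 on success, -1 if the string matches neither form. */
        static int parse_bdf(const char *s, struct bdf *out)
        {
            unsigned int dom = 0, bus, dev, func;

            if (sscanf(s, "%x:%x:%x.%x", &dom, &bus, &dev, &func) != 4) {
                dom = 0;                     /* no domain prefix: default 0 */
                if (sscanf(s, "%x:%x.%x", &bus, &dev, &func) != 3)
                    return -1;
            }
            out->domain = (uint16_t)dom;
            out->bus    = (uint8_t)bus;
            out->dev    = (uint8_t)dev;
            out->func   = (uint8_t)func;
            return 0;
        }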

  18. PDF PCI-to-PCI Bridge Architecture Specification

    This specification defines the behavior of a compliant PCI-to-PCI bridge. A PCI-to-PCI bridge that conforms to this specification and the PCI Local Bus Specification is a compliant implementation. Compliant bridges may differ from each other in performance and to some extent functionality. Related Documents

  19. PDF Specifications

    The primary objectives of this External Cable Specification for PCI Express 5.0 and 6.0 document are to provide • 32.0 GT/s and 64.0 GT/s electrical specifications for mated cable assembly and mated cable connector based on SFF-TA-1032 Specification, • specifications of sideband functions for sideband pins allocated in the ...

  20. PCI Code and ID Assignment Specification Revision 1.11 24 Jan 2019

    The content of this chapter was originally in Appendix D of the PCI Local Bus Specification, Revision 3.0. 1.1. Base Class 00h. This base class is defined to provide backward compatibility for devices that were built before the Class Code field was defined.
