Intel® Microarchitecture Code Named Haswell X Events

This section provides a reference for the hardware events that can be monitored for the CPU(s):

  • Intel® Xeon® E5/E7 v3 processor
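
On Linux, these events are typically collected through the perf_event_open(2) system call or the perf tool built on top of it. The sketch below is a minimal example, not a definitive recipe: it counts retired instructions via the generic PERF_COUNT_HW_INSTRUCTIONS alias (which corresponds to INST_RETIRED.ANY below). The microarchitecture-specific events in this table would instead be programmed as PERF_TYPE_RAW using the event-select/umask encodings Intel publishes for them, which are not reproduced here.

    /* Minimal sketch: count retired instructions for a region of code.
     * PERF_COUNT_HW_INSTRUCTIONS is a generic alias; Haswell-specific
     * events from the table are programmed with attr.type = PERF_TYPE_RAW
     * and attr.config built from Intel's event-select/umask tables. */
    #include <linux/perf_event.h>
    #include <asm/unistd.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdio.h>

    static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                int cpu, int group_fd, unsigned long flags)
    {
        /* No glibc wrapper exists; invoke the raw syscall. */
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_INSTRUCTIONS; /* ~ INST_RETIRED.ANY */
        attr.disabled = 1;
        attr.exclude_kernel = 1;                  /* count ring 3 only */

        int fd = (int)perf_event_open(&attr, 0 /* this process */, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        /* ... region of interest ... */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        uint64_t count = 0;
        read(fd, &count, sizeof(count)); /* default read_format: one u64 */
        printf("instructions retired: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
    }
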
    Event Name    Description
    ARITH.DIVIDER_UOPS Any uop executed by the Divider. (This includes all divide uops, sqrt, ...)
    AVX_INSTS.ALL Counts AVX instructions. Note that a whole rep string only counts AVX_INSTS.ALL once.
    BACLEARS.ANY Number of front end re-steers due to BPU misprediction.
    BR_INST_EXEC.ALL_BRANCHES Counts all near executed branches (not necessarily retired).
    BR_INST_EXEC.ALL_CONDITIONAL Speculative and retired macro-conditional branches.
    BR_INST_EXEC.ALL_DIRECT_JMP Speculative and retired macro-unconditional branches excluding calls and indirects.
    BR_INST_EXEC.ALL_DIRECT_NEAR_CALL Speculative and retired direct near calls.
    BR_INST_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET Speculative and retired indirect branches excluding calls and returns.
    BR_INST_EXEC.ALL_INDIRECT_NEAR_RETURN Speculative and retired indirect return branches.
    BR_INST_EXEC.NONTAKEN_CONDITIONAL Not taken macro-conditional branches.
    BR_INST_EXEC.TAKEN_CONDITIONAL Taken speculative and retired macro-conditional branches.
    BR_INST_EXEC.TAKEN_DIRECT_JUMP Taken speculative and retired macro-conditional branch instructions excluding calls and indirects.
    BR_INST_EXEC.TAKEN_DIRECT_NEAR_CALL Taken speculative and retired direct near calls.
    BR_INST_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET Taken speculative and retired indirect branches excluding calls and returns.
    BR_INST_EXEC.TAKEN_INDIRECT_NEAR_CALL Taken speculative and retired indirect calls.
    BR_INST_EXEC.TAKEN_INDIRECT_NEAR_RETURN Taken speculative and retired indirect branches with return mnemonic.
    BR_INST_RETIRED.ALL_BRANCHES Branch instructions at retirement.
    BR_INST_RETIRED.ALL_BRANCHES_PS All (macro) branch instructions retired.
    BR_INST_RETIRED.CONDITIONAL Counts the number of conditional branch instructions retired.
    BR_INST_RETIRED.CONDITIONAL_PS Conditional branch instructions retired.
    BR_INST_RETIRED.FAR_BRANCH Number of far branches retired.
    BR_INST_RETIRED.NEAR_CALL Direct and indirect near call instructions retired.
    BR_INST_RETIRED.NEAR_CALL_PS Direct and indirect near call instructions retired.
    BR_INST_RETIRED.NEAR_CALL_R3 Direct and indirect macro near call instructions retired (captured in ring 3).
    BR_INST_RETIRED.NEAR_CALL_R3_PS Direct and indirect macro near call instructions retired (captured in ring 3).
    BR_INST_RETIRED.NEAR_RETURN Counts the number of near return instructions retired.
    BR_INST_RETIRED.NEAR_RETURN_PS Return instructions retired.
    BR_INST_RETIRED.NEAR_TAKEN Number of near taken branches retired.
    BR_INST_RETIRED.NEAR_TAKEN_PS Taken branch instructions retired.
    BR_INST_RETIRED.NOT_TAKEN Counts the number of not taken branch instructions retired.
    BR_MISP_EXEC.ALL_BRANCHES Counts all near executed branches (not necessarily retired).
    BR_MISP_EXEC.ALL_CONDITIONAL Speculative and retired mispredicted macro conditional branches.
    BR_MISP_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET Mispredicted indirect branches excluding calls and returns.
    BR_MISP_EXEC.NONTAKEN_CONDITIONAL Not taken speculative and retired mispredicted macro conditional branches.
    BR_MISP_EXEC.TAKEN_CONDITIONAL Taken speculative and retired mispredicted macro conditional branches.
    BR_MISP_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET Taken speculative and retired mispredicted indirect branches excluding calls and returns.
    BR_MISP_EXEC.TAKEN_INDIRECT_NEAR_CALL Taken speculative and retired mispredicted indirect calls.
    BR_MISP_EXEC.TAKEN_RETURN_NEAR Taken speculative and retired mispredicted indirect branches with return mnemonic.
    BR_MISP_RETIRED.ALL_BRANCHES Mispredicted branch instructions at retirement.
    BR_MISP_RETIRED.ALL_BRANCHES_PS This event counts all mispredicted branch instructions retired. This is a precise event.
    BR_MISP_RETIRED.CONDITIONAL Mispredicted conditional branch instructions retired.
    BR_MISP_RETIRED.CONDITIONAL_PS Mispredicted conditional branch instructions retired.
    BR_MISP_RETIRED.NEAR_TAKEN Number of near branch instructions retired that were taken but mispredicted.
    BR_MISP_RETIRED.NEAR_TAKEN_PS Number of near branch instructions retired that were mispredicted and taken.
    CPL_CYCLES.RING0 Unhalted core cycles when the thread is in ring 0.
    CPL_CYCLES.RING0_TRANS Number of intervals between processor halts while thread is in ring 0.
    CPL_CYCLES.RING123 Unhalted core cycles when the thread is not in ring 0.
    CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE Count XClk pulses when this thread is unhalted and the other thread is halted.
    CPU_CLK_THREAD_UNHALTED.REF_XCLK Increments at the frequency of XCLK (100 MHz) when not halted.
    CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY Reference cycles when at least one thread on the physical core is unhalted (counts at 100 MHz rate).
    CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE Count XClk pulses when this thread is unhalted and the other thread is halted.
    CPU_CLK_UNHALTED.REF_TSC This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state.
    CPU_CLK_UNHALTED.REF_XCLK Reference cycles when the thread is unhalted. (counts at 100 MHz rate)
    CPU_CLK_UNHALTED.REF_XCLK_ANY Reference cycles when at least one thread on the physical core is unhalted (counts at 100 MHz rate).
    CPU_CLK_UNHALTED.THREAD This event counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling.
    CPU_CLK_UNHALTED.THREAD_ANY Core cycles when at least one thread on the physical core is not in halt state.
    CPU_CLK_UNHALTED.THREAD_P Counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling.
    CPU_CLK_UNHALTED.THREAD_P_ANY Core cycles when at least one thread on the physical core is not in halt state.
    CYCLE_ACTIVITY.CYCLES_L1D_PENDING Cycles with pending L1 data cache miss loads. Set Cmask=8 to count cycles.
    CYCLE_ACTIVITY.CYCLES_L2_PENDING Cycles with pending L2 miss loads. Set Cmask=2 to count cycles.
    CYCLE_ACTIVITY.CYCLES_LDM_PENDING Cycles with pending memory loads. Set Cmask=2 to count cycles.
    CYCLE_ACTIVITY.CYCLES_NO_EXECUTE This event counts cycles during which no instructions were executed in the execution stage of the pipeline.
    CYCLE_ACTIVITY.STALLS_L1D_PENDING Execution stalls due to L1 data cache miss loads. Set Cmask=0CH.
    CYCLE_ACTIVITY.STALLS_L2_PENDING Number of loads missed L2.
    CYCLE_ACTIVITY.STALLS_LDM_PENDING This event counts cycles during which no instructions were executed in the execution stage of the pipeline and there were memory instructions pending (waiting for data).
    DSB2MITE_SWITCHES.PENALTY_CYCLES Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles.
    DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK Misses in all TLB levels that cause a page walk of any page size.
    DTLB_LOAD_MISSES.PDE_CACHE_MISS DTLB demand load misses with low part of linear-to-physical address translation missed.
    DTLB_LOAD_MISSES.STLB_HIT Number of cache load STLB hits. No page walk.
    DTLB_LOAD_MISSES.STLB_HIT_2M This event counts load operations from a 2M page that miss the first DTLB level but hit the second and do not cause page walks.
    DTLB_LOAD_MISSES.STLB_HIT_4K This event counts load operations from a 4K page that miss the first DTLB level but hit the second and do not cause page walks.
    DTLB_LOAD_MISSES.WALK_COMPLETED Completed page walks in any TLB of any page size due to demand load misses.
    DTLB_LOAD_MISSES.WALK_COMPLETED_1G Load miss in all TLB levels causes a page walk that completes. (1G)
    DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M Completed page walks due to demand load misses that caused 2M/4M page walks in any TLB levels.
    DTLB_LOAD_MISSES.WALK_COMPLETED_4K Completed page walks due to demand load misses that caused 4K page walks in any TLB levels.
    DTLB_LOAD_MISSES.WALK_DURATION This event counts cycles when the page miss handler (PMH) is servicing page walks caused by DTLB load misses.
    DTLB_STORE_MISSES.MISS_CAUSES_A_WALK Miss in all TLB levels causes a page walk of any page size (4K/2M/4M/1G).
    DTLB_STORE_MISSES.PDE_CACHE_MISS DTLB store misses with low part of linear-to-physical address translation missed.
    DTLB_STORE_MISSES.STLB_HIT Store operations that miss the first TLB level but hit the second and do not cause page walks.
    DTLB_STORE_MISSES.STLB_HIT_2M This event counts store operations from a 2M page that miss the first DTLB level but hit the second and do not cause page walks.
    DTLB_STORE_MISSES.STLB_HIT_4K This event counts store operations from a 4K page that miss the first DTLB level but hit the second and do not cause page walks.
    DTLB_STORE_MISSES.WALK_COMPLETED Completed page walks due to store miss in any TLB levels of any page size (4K/2M/4M/1G).
    DTLB_STORE_MISSES.WALK_COMPLETED_1G Store misses in all DTLB levels that cause completed page walks. (1G)
    DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M Completed page walks due to store misses in one or more TLB levels of 2M/4M page structure.
    DTLB_STORE_MISSES.WALK_COMPLETED_4K Completed page walks due to store misses in one or more TLB levels of 4K page structure.
    DTLB_STORE_MISSES.WALK_DURATION This event counts cycles when the page miss handler (PMH) is servicing page walks caused by DTLB store misses.
    EPT.WALK_CYCLES Cycle count for an Extended Page Table walk.
    FP_ASSIST.ANY Cycles with any input/output SSE* or FP assists.
    FP_ASSIST.SIMD_INPUT Number of SIMD FP assists due to input values.
    FP_ASSIST.SIMD_OUTPUT Number of SIMD FP assists due to output values.
    FP_ASSIST.X87_INPUT Number of X87 FP assists due to input values.
    FP_ASSIST.X87_OUTPUT Number of X87 FP assists due to output values.
    HLE_RETIRED.ABORTED Number of times an HLE execution aborted due to any reason (multiple categories may count as one).
    HLE_RETIRED.ABORTED_MISC1 Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).
    HLE_RETIRED.ABORTED_MISC2 Number of times an HLE execution aborted due to uncommon conditions.
    HLE_RETIRED.ABORTED_MISC3 Number of times an HLE execution aborted due to HLE-unfriendly instructions.
    HLE_RETIRED.ABORTED_MISC4 Number of times an HLE execution aborted due to incompatible memory type.
    HLE_RETIRED.ABORTED_MISC5 Number of times an HLE execution aborted due to none of the previous 4 categories (e.g. interrupts).
    HLE_RETIRED.ABORTED_PS Number of times an HLE execution aborted due to any reason (multiple categories may count as one).
    HLE_RETIRED.COMMIT Number of times an HLE execution successfully committed.
    HLE_RETIRED.START Number of times an HLE execution started.
    ICACHE.HIT Number of Instruction Cache, Streaming Buffer and Victim Cache reads, both cacheable and non-cacheable, including UC fetches.
    ICACHE.IFDATA_STALL Cycles where a code fetch is stalled due to L1 instruction-cache miss.
    ICACHE.IFETCH_STALL Cycles where a code fetch is stalled due to L1 instruction-cache miss.
    ICACHE.MISSES This event counts Instruction Cache (ICACHE) misses.
    IDQ.ALL_DSB_CYCLES_4_UOPS Counts cycles when the DSB delivers four uops. Set Cmask = 4.
    IDQ.ALL_DSB_CYCLES_ANY_UOPS Counts cycles when the DSB delivers at least one uop. Set Cmask = 1.
    IDQ.ALL_MITE_CYCLES_4_UOPS Counts cycles when MITE delivers four uops. Set Cmask = 4.
    IDQ.ALL_MITE_CYCLES_ANY_UOPS Counts cycles when MITE delivers at least one uop. Set Cmask = 1.
    IDQ.DSB_CYCLES Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path.
    IDQ.DSB_UOPS Increments each cycle by the number of uops delivered to the IDQ from the DSB path. Set Cmask = 1 to count cycles.
    IDQ.EMPTY Counts cycles the IDQ is empty.
    IDQ.MITE_ALL_UOPS Number of uops delivered to IDQ from any path.
    IDQ.MITE_CYCLES Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE path.
    IDQ.MITE_UOPS Increments each cycle by the number of uops delivered to the IDQ from the MITE path. Set Cmask = 1 to count cycles.
    IDQ.MS_CYCLES This event counts cycles during which the microcode sequencer assisted the Front-end in delivering uops. Microcode assists are used for complex instructions or scenarios that can't be handled by the standard decoder. Using other instructions, if possible, will usually improve performance.
    IDQ.MS_DSB_CYCLES Cycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy.
    IDQ.MS_DSB_OCCUR Deliveries to Instruction Decode Queue (IDQ) initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is busy.
    IDQ.MS_DSB_UOPS Increments each cycle by the number of uops delivered to the IDQ from the DSB while the MS is busy. Set Cmask = 1 to count cycles. Set Edge = 1 to count the number of deliveries.
    IDQ.MS_MITE_UOPS Increments each cycle by the number of uops delivered to the IDQ from MITE while the MS is busy. Set Cmask = 1 to count cycles.
    IDQ.MS_SWITCHES Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.
    IDQ.MS_UOPS This event counts uops delivered by the Front-end with the assistance of the microcode sequencer. Microcode assists are used for complex instructions or scenarios that can't be handled by the standard decoder. Using other instructions, if possible, will usually improve performance.
    IDQ_UOPS_NOT_DELIVERED.CORE This event counts the number of undelivered (unallocated) uops from the Front-end to the Resource Allocation Table (RAT) while the Back-end of the processor is not stalled. The Front-end can allocate up to 4 uops per cycle, so this event can increment 0-4 times per cycle depending on the number of unallocated uops. This event is counted on a per-core basis.
    IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE This event counts the number of cycles during which the Front-end allocated exactly zero uops to the Resource Allocation Table (RAT) while the Back-end of the processor is not stalled. This event is counted on a per-core basis.
    IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK Counts cycles in which the Front-end delivered 4 uops or the Resource Allocation Table (RAT) was stalling the Front-end.
    IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_1_UOP_DELIV.CORE Cycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled.
    IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_2_UOP_DELIV.CORE Cycles with less than 2 uops delivered by the front end.
    IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_3_UOP_DELIV.CORE Cycles with less than 3 uops delivered by the front end.
    ILD_STALL.IQ_FULL Stall cycles due to the IQ being full.
    ILD_STALL.LCP This event counts cycles where the decoder is stalled on an instruction with a length changing prefix (LCP).
    INST_RETIRED.ANY This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. INST_RETIRED.ANY is counted by a designated fixed counter, leaving the programmable counters available for other events. Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions.
    INST_RETIRED.ANY_P Number of instructions at retirement.
    INST_RETIRED.PREC_DIST Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution.
    INST_RETIRED.X87 This is a non-precise version (that is, does not use PEBS) of the event that counts FP operations retired. For X87 FP operations that have no exceptions counting also includes flows that have several X87, or flows that use X87 uops in the exception handling.
    INT_MISC.RECOVERY_CYCLES This event counts the number of cycles spent waiting for a recovery after an event such as a processor nuke, JEClear, assist, HLE/RTM abort, etc.
    INT_MISC.RECOVERY_CYCLES_ANY Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke).
    ITLB.ITLB_FLUSH Counts the number of ITLB flushes, includes 4k/2M/4M pages.
    ITLB_MISSES.MISS_CAUSES_A_WALK Misses in ITLB that cause a page walk of any page size.
    ITLB_MISSES.STLB_HIT ITLB misses that hit STLB. No page walk.
    ITLB_MISSES.STLB_HIT_2M ITLB misses that hit STLB (2M).
    ITLB_MISSES.STLB_HIT_4K ITLB misses that hit STLB (4K).
    ITLB_MISSES.WALK_COMPLETED Completed page walks in ITLB of any page size.
    ITLB_MISSES.WALK_COMPLETED_1G ITLB miss in all TLB levels causes a page walk that completes. (1G)
    ITLB_MISSES.WALK_COMPLETED_2M_4M Completed page walks due to misses in ITLB 2M/4M page entries.
    ITLB_MISSES.WALK_COMPLETED_4K Completed page walks due to misses in ITLB 4K page entries.
    ITLB_MISSES.WALK_DURATION This event counts cycles when the page miss handler (PMH) is servicing page walks caused by ITLB misses.
    L1D.REPLACEMENT This event counts when new data lines are brought into the L1 Data cache, which cause other lines to be evicted from the cache.
    L1D_PEND_MISS.FB_FULL Cycles a demand request was blocked due to Fill Buffer unavailability.
    L1D_PEND_MISS.PENDING Increments the number of outstanding L1D misses every cycle. Set Cmask = 1 and Edge =1 to count occurrences.
    L1D_PEND_MISS.PENDING_CYCLES Cycles with L1D load Misses outstanding.
    L1D_PEND_MISS.PENDING_CYCLES_ANY Cycles with L1D load Misses outstanding from any thread on physical core.
    L1D_PEND_MISS.REQUEST_FB_FULL Number of times a request needed a Fill Buffer (FB) entry but there was no entry available for it; that is, FB unavailability was the dominant reason for blocking the request. A request includes cacheable/uncacheable demand loads, stores, and SW prefetches. HW prefetches are excluded.
    L2_DEMAND_RQSTS.WB_HIT Not rejected writebacks that hit L2 cache.
    L2_LINES_IN.ALL This event counts the number of L2 cache lines brought into the L2 cache. Lines are filled into the L2 cache when there was an L2 miss.
    L2_LINES_IN.E L2 cache lines in E state filling L2.
    L2_LINES_IN.I L2 cache lines in I state filling L2.
    L2_LINES_IN.S L2 cache lines in S state filling L2.
    L2_LINES_OUT.DEMAND_CLEAN Clean L2 cache lines evicted by demand.
    L2_LINES_OUT.DEMAND_DIRTY Dirty L2 cache lines evicted by demand.
    L2_RQSTS.ALL_CODE_RD Counts all L2 code requests.
    L2_RQSTS.ALL_DEMAND_DATA_RD Counts any demand and L1 HW prefetch data load requests to L2.
    L2_RQSTS.ALL_DEMAND_MISS Demand requests that miss L2 cache.
    L2_RQSTS.ALL_DEMAND_REFERENCES Demand requests to L2 cache.
    L2_RQSTS.ALL_PF Counts all L2 HW prefetcher requests.
    L2_RQSTS.ALL_RFO Counts all L2 store RFO requests.
    L2_RQSTS.CODE_RD_HIT Number of instruction fetches that hit the L2 cache.
    L2_RQSTS.CODE_RD_MISS Number of instruction fetches that missed the L2 cache.
    L2_RQSTS.DEMAND_DATA_RD_HIT Demand data read requests that hit L2 cache.
    L2_RQSTS.DEMAND_DATA_RD_MISS Demand data read requests that missed L2, no rejects.
    L2_RQSTS.L2_PF_HIT Counts all L2 HW prefetcher requests that hit L2.
    L2_RQSTS.L2_PF_MISS Counts all L2 HW prefetcher requests that missed L2.
    L2_RQSTS.MISS All requests that missed L2.
    L2_RQSTS.REFERENCES All requests to L2 cache.
    L2_RQSTS.RFO_HIT Counts the number of store RFO requests that hit the L2 cache.
    L2_RQSTS.RFO_MISS Counts the number of store RFO requests that miss the L2 cache.
    L2_TRANS.ALL_PF Any MLC or L3 HW prefetch accessing L2, including rejects.
    L2_TRANS.ALL_REQUESTS Transactions accessing L2 pipe.
    L2_TRANS.CODE_RD L2 cache accesses when fetching instructions.
    L2_TRANS.DEMAND_DATA_RD Demand data read requests that access L2 cache.
    L2_TRANS.L1D_WB L1D writebacks that access L2 cache.
    L2_TRANS.L2_FILL L2 fill requests that access L2 cache.
    L2_TRANS.L2_WB L2 writebacks that access L2 cache.
    L2_TRANS.RFO RFO requests that access L2 cache.
    LD_BLOCKS.NO_SR The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.
    LD_BLOCKS.STORE_FORWARD This event counts loads that followed a store to the same address, where the data could not be forwarded inside the pipeline from the store to the load. The most common reason why store forwarding would be blocked is when a load's address range overlaps with a preceding smaller uncompleted store. The penalty for blocked store forwarding is that the load must wait for the store to write its value to the cache before it can be issued.
    LD_BLOCKS_PARTIAL.ADDRESS_ALIAS Aliasing occurs when a load is issued after a store and their memory addresses are offset by 4K. This event counts the number of loads that aliased with a preceding store, resulting in an extended address check in the pipeline which can have a performance impact.
    LOAD_HIT_PRE.HW_PF Non-SW-prefetch load dispatches that hit fill buffer allocated for H/W prefetch.
    LOAD_HIT_PRE.SW_PF Non-SW-prefetch load dispatches that hit fill buffer allocated for S/W prefetch.
    LOCK_CYCLES.CACHE_LOCK_DURATION Cycles in which the L1D is locked.
    LOCK_CYCLES.SPLIT_LOCK_UC_LOCK_DURATION Cycles in which the L1D and L2 are locked, due to a UC lock or split lock.
    LONGEST_LAT_CACHE.MISS This event counts each cache miss condition for references to the last level cache.
    LONGEST_LAT_CACHE.REFERENCE This event counts requests originating from the core that reference a cache line in the last level cache.
    LSD.CYCLES_4_UOPS Cycles when 4 uops were delivered by the LSD and did not come from the decoder.
    LSD.CYCLES_ACTIVE Cycles when uops were delivered by the LSD and did not come from the decoder.
    LSD.UOPS Number of uops delivered by the LSD.
    MACHINE_CLEARS.COUNT Number of machine clears (nukes) of any type.
    MACHINE_CLEARS.CYCLES Cycles during which there was a nuke. Accounts for both thread-specific and all-thread nukes.
    MACHINE_CLEARS.MASKMOV This event counts the number of executed Intel AVX masked load operations that refer to an illegal address range with the mask bits set to 0.
    MACHINE_CLEARS.MEMORY_ORDERING This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data inflight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.
    MACHINE_CLEARS.SMC This event is incremented when self-modifying code (SMC) is detected, which causes a machine clear. Machine clears can have a significant performance impact if they are happening frequently.
    MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT Retired load uops which data sources were L3 and cross-core snoop hits in on-pkg core cache.
    MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT_PS This event counts retired load uops that hit in the L3 cache, but required a cross-core snoop which resulted in a HIT in an on-pkg core cache. This does not include hardware prefetches. This is a precise event.
    MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM Retired load uops which data sources were HitM responses from shared L3.
    MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM_PS This event counts retired load uops that hit in the L3 cache, but required a cross-core snoop which resulted in a HITM (hit modified) in an on-pkg core cache. This does not include hardware prefetches. This is a precise event.
    MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS Retired load uops which data sources were L3 hit and cross-core snoop missed in on-pkg core cache.
    MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS_PS Retired load uops which data sources were L3 hit and cross-core snoop missed in on-pkg core cache.
    MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_NONE Retired load uops which data sources were hits in L3 without snoops required.
    MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_NONE_PS Retired load uops which data sources were hits in L3 without snoops required.
    MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM This event counts retired load uops where the data came from local DRAM. This does not include hardware prefetches.
    MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM_PS This event counts retired load uops where the data came from local DRAM. This does not include hardware prefetches. This is a precise event.
    MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM Retired load uops whose data source was remote DRAM, with either snoop not needed or snoop miss (RspI).
    MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM_PS Retired load uops whose data source was remote DRAM, with either snoop not needed or snoop miss (RspI). (Precise Event)
    MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD Retired load uops whose data source was forwarded from a remote cache.
    MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD_PS Retired load uops whose data source was forwarded from a remote cache. (Precise Event)
    MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM Retired load uops whose data source was a remote cache HITM.
    MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM_PS Retired load uops whose data source was a remote cache HITM. (Precise Event)
    MEM_LOAD_UOPS_RETIRED.HIT_LFB Retired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready.
    MEM_LOAD_UOPS_RETIRED.HIT_LFB_PS Retired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready.
    MEM_LOAD_UOPS_RETIRED.L1_HIT Retired load uops with L1 cache hits as data sources.
    MEM_LOAD_UOPS_RETIRED.L1_HIT_PS Retired load uops with L1 cache hits as data sources.
    MEM_LOAD_UOPS_RETIRED.L1_MISS Retired load uops missed L1 cache as data sources.
    MEM_LOAD_UOPS_RETIRED.L1_MISS_PS This event counts retired load uops in which data sources missed in the L1 cache. This does not include hardware prefetches. This is a precise event.
    MEM_LOAD_UOPS_RETIRED.L2_HIT Retired load uops with L2 cache hits as data sources.
    MEM_LOAD_UOPS_RETIRED.L2_HIT_PS Retired load uops with L2 cache hits as data sources.
    MEM_LOAD_UOPS_RETIRED.L2_MISS Retired load uops missed L2. Unknown data source excluded.
    MEM_LOAD_UOPS_RETIRED.L2_MISS_PS Retired load uops with L2 cache misses as data sources.
    MEM_LOAD_UOPS_RETIRED.L3_HIT Retired load uops with L3 cache hits as data sources.
    MEM_LOAD_UOPS_RETIRED.L3_HIT_PS This event counts retired load uops in which data sources were data hits in the L3 cache without snoops required. This does not include hardware prefetches. This is a precise event.
    MEM_LOAD_UOPS_RETIRED.L3_MISS Retired load uops missed L3. Excludes unknown data source.
    MEM_LOAD_UOPS_RETIRED.L3_MISS_PS Miss in last-level (L3) cache. Excludes unknown data source.
    MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128 Loads with latency value being above 128.
    MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16 Loads with latency value being above 16.
    MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256 Loads with latency value being above 256.
    MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32 Loads with latency value being above 32.
    MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4 Loads with latency value being above 4.
    MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512 Loads with latency value being above 512.
    MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64 Loads with latency value being above 64.
    MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8 Loads with latency value being above 8.
    MEM_UOPS_RETIRED.ALL_LOADS All retired load uops.
    MEM_UOPS_RETIRED.ALL_LOADS_PS All retired load uops. (Precise Event)
    MEM_UOPS_RETIRED.ALL_STORES All retired store uops.
    MEM_UOPS_RETIRED.ALL_STORES_PS This event counts all store uops retired. This is a precise event.
    MEM_UOPS_RETIRED.LOCK_LOADS Retired load uops with locked access.
    MEM_UOPS_RETIRED.LOCK_LOADS_PS Retired load uops with locked access. (Precise Event)
    MEM_UOPS_RETIRED.SPLIT_LOADS Retired load uops that split across a cacheline boundary.
    MEM_UOPS_RETIRED.SPLIT_LOADS_PS This event counts load uops retired which had memory addresses split across 2 cache lines. A line split is across 64B cache-lines which may include a page split (4K). This is a precise event.
    MEM_UOPS_RETIRED.SPLIT_STORES Retired store uops that split across a cacheline boundary.
    MEM_UOPS_RETIRED.SPLIT_STORES_PS This event counts store uops retired which had memory addresses split across 2 cache lines. A line split is across 64B cache-lines which may include a page split (4K). This is a precise event.
    MEM_UOPS_RETIRED.STLB_MISS_LOADS Retired load uops that miss the STLB.
    MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS Retired load uops that miss the STLB. (Precise Event)
    MEM_UOPS_RETIRED.STLB_MISS_STORES Retired store uops that miss the STLB.
    MEM_UOPS_RETIRED.STLB_MISS_STORES_PS Retired store uops that miss the STLB. (Precise Event)
    MISALIGN_MEM_REF.LOADS Speculative cache-line split load uops dispatched to L1D.
    MISALIGN_MEM_REF.STORES Speculative cache-line split store-address uops dispatched to L1D.
    MOVE_ELIMINATION.INT_ELIMINATED Number of integer move elimination candidate uops that were eliminated.
    MOVE_ELIMINATION.INT_NOT_ELIMINATED Number of integer move elimination candidate uops that were not eliminated.
    MOVE_ELIMINATION.SIMD_ELIMINATED Number of SIMD move elimination candidate uops that were eliminated.
    MOVE_ELIMINATION.SIMD_NOT_ELIMINATED Number of SIMD move elimination candidate uops that were not eliminated.
    OFFCORE_REQUESTS.ALL_DATA_RD Data read requests sent to uncore (demand and prefetch).
    OFFCORE_REQUESTS.DEMAND_CODE_RD Demand code read requests sent to uncore.
    OFFCORE_REQUESTS.DEMAND_DATA_RD Demand data read requests sent to uncore.
    OFFCORE_REQUESTS.DEMAND_RFO Demand RFO read requests sent to uncore, including regular RFOs, locks, ItoM.
    OFFCORE_REQUESTS_BUFFER.SQ_FULL Offcore requests buffer cannot take more entries for this thread of the core.
    OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD Offcore outstanding cacheable data read transactions in SQ to uncore. Set Cmask=1 to count cycles.
    OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore.
    OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore.
    OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO Offcore outstanding demand RFO read transactions in the SuperQueue (SQ), queue to uncore, every cycle.
    OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD Offcore outstanding Demand code Read transactions in SQ to uncore. Set Cmask=1 to count cycles.
    OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD Offcore outstanding demand data read transactions in SQ to uncore. Set Cmask=1 to count cycles.
    OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD_GE_6 Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue.
    OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO Offcore outstanding RFO store transactions in SQ to uncore. Set Cmask=1 to count cycles.
    OFFCORE_RESPONSE Offcore response events can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and a predefined mask bit value in a dedicated MSR to specify attributes of the offcore transaction.
    OTHER_ASSISTS.ANY_WB_ASSIST Number of microcode assists invoked by HW upon uop writeback.
    OTHER_ASSISTS.AVX_TO_SSE Number of transitions from AVX-256 to legacy SSE when penalty applicable.
    OTHER_ASSISTS.SSE_TO_AVX Number of transitions from SSE to AVX-256 when penalty applicable.
    PAGE_WALKER_LOADS.DTLB_L1 Number of DTLB page walker loads that hit in the L1+FB.
    PAGE_WALKER_LOADS.DTLB_L2 Number of DTLB page walker loads that hit in the L2.
    PAGE_WALKER_LOADS.DTLB_L3 Number of DTLB page walker loads that hit in the L3.
    PAGE_WALKER_LOADS.DTLB_MEMORY Number of DTLB page walker loads from memory.
    PAGE_WALKER_LOADS.EPT_DTLB_L1 Counts the number of Extended Page Table walks from the DTLB that hit in the L1 and FB.
    PAGE_WALKER_LOADS.EPT_DTLB_L2 Counts the number of Extended Page Table walks from the DTLB that hit in the L2.
    PAGE_WALKER_LOADS.EPT_DTLB_L3 Counts the number of Extended Page Table walks from the DTLB that hit in the L3.
    PAGE_WALKER_LOADS.EPT_DTLB_MEMORY Counts the number of Extended Page Table walks from the DTLB that hit in memory.
    PAGE_WALKER_LOADS.EPT_ITLB_L1 Counts the number of Extended Page Table walks from the ITLB that hit in the L1 and FB.
    PAGE_WALKER_LOADS.EPT_ITLB_L2 Counts the number of Extended Page Table walks from the ITLB that hit in the L2.
    PAGE_WALKER_LOADS.EPT_ITLB_L3 Counts the number of Extended Page Table walks from the ITLB that hit in the L3.
    PAGE_WALKER_LOADS.EPT_ITLB_MEMORY Counts the number of Extended Page Table walks from the ITLB that hit in memory.
    PAGE_WALKER_LOADS.ITLB_L1 Number of ITLB page walker loads that hit in the L1+FB.
    PAGE_WALKER_LOADS.ITLB_L2 Number of ITLB page walker loads that hit in the L2.
    PAGE_WALKER_LOADS.ITLB_L3 Number of ITLB page walker loads that hit in the L3.
    PAGE_WALKER_LOADS.ITLB_MEMORY Number of ITLB page walker loads from memory.
    RESOURCE_STALLS.ANY Cycles allocation is stalled due to a resource-related reason.
    RESOURCE_STALLS.ROB Cycles stalled due to re-order buffer full.
    RESOURCE_STALLS.RS Cycles stalled due to no eligible RS entry available.
    RESOURCE_STALLS.SB This event counts cycles during which no instructions were allocated because no Store Buffers (SB) were available.
    ROB_MISC_EVENTS.LBR_INSERTS Count cases of saving new LBR records by hardware.
    RS_EVENTS.EMPTY_CYCLES This event counts cycles when the Reservation Station (RS) is empty for the thread. The RS is a structure that buffers allocated micro-ops from the Front-end. If there are many cycles when the RS is empty, it may represent an underflow of instructions delivered from the Front-end.
    RS_EVENTS.EMPTY_END Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issues.
    RTM_RETIRED.ABORTED Number of times an RTM execution aborted due to any reason (multiple categories may count as one).
    RTM_RETIRED.ABORTED_MISC1 Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).
    RTM_RETIRED.ABORTED_MISC2 Number of times an RTM execution aborted due to various memory events (e.g., read/write capacity and conflicts).
    RTM_RETIRED.ABORTED_MISC3 Number of times an RTM execution aborted due to HLE-unfriendly instructions.
    RTM_RETIRED.ABORTED_MISC4 Number of times an RTM execution aborted due to incompatible memory type.
    RTM_RETIRED.ABORTED_MISC5 Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).
    RTM_RETIRED.ABORTED_PS Number of times an RTM execution aborted due to any reason (multiple categories may count as one).
    RTM_RETIRED.COMMIT Number of times an RTM execution successfully committed.
    RTM_RETIRED.START Number of times an RTM execution started.
    SQ_MISC.SPLIT_LOCK Split locks in the SQ.
    TLB_FLUSH.DTLB_THREAD DTLB flush attempts of the thread-specific entries.
    TLB_FLUSH.STLB_ANY Count number of STLB flush attempts.
    TX_EXEC.MISC1 Counts the number of times a class of instructions that may cause a transactional abort was executed. Since this is the count of execution, it may not always cause a transactional abort.
    TX_EXEC.MISC2 Counts the number of times a class of instructions (e.g., vzeroupper) that may cause a transactional abort was executed inside a transactional region.
    TX_EXEC.MISC3 Counts the number of times an instruction execution caused the transactional nest count supported to be exceeded.
    TX_EXEC.MISC4 Counts the number of times a XBEGIN instruction was executed inside an HLE transactional region.
    TX_EXEC.MISC5 Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region.
    TX_MEM.ABORT_CAPACITY_WRITE Number of times a transactional abort was signaled due to a data capacity limitation for transactional writes.
    TX_MEM.ABORT_CONFLICT Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address.
    TX_MEM.ABORT_HLE_ELISION_BUFFER_MISMATCH Number of times an HLE transactional execution aborted due to XRELEASE lock not satisfying the address and value requirements in the elision buffer.
    TX_MEM.ABORT_HLE_ELISION_BUFFER_NOT_EMPTY Number of times an HLE transactional execution aborted due to NoAllocatedElisionBuffer being non-zero.
    TX_MEM.ABORT_HLE_ELISION_BUFFER_UNSUPPORTED_ALIGNMENT Number of times an HLE transactional execution aborted due to an unsupported read alignment from the elision buffer.
    TX_MEM.ABORT_HLE_STORE_TO_ELIDED_LOCK Number of times a HLE transactional region aborted due to a non XRELEASE prefixed instruction writing to an elided lock in the elision buffer.
    TX_MEM.HLE_ELISION_BUFFER_FULL Number of times HLE lock could not be elided due to ElisionBufferAvailable being zero.
    UOPS_DISPATCHED_PORT.PORT_0 Cycles per thread when uops are executed in port 0.
    UOPS_DISPATCHED_PORT.PORT_1 Cycles per thread when uops are executed in port 1.
    UOPS_DISPATCHED_PORT.PORT_2 Cycles per thread when uops are executed in port 2.
    UOPS_DISPATCHED_PORT.PORT_3 Cycles per thread when uops are executed in port 3.
    UOPS_DISPATCHED_PORT.PORT_4 Cycles per thread when uops are executed in port 4.
    UOPS_DISPATCHED_PORT.PORT_5 Cycles per thread when uops are executed in port 5.
    UOPS_DISPATCHED_PORT.PORT_6 Cycles per thread when uops are executed in port 6.
    UOPS_DISPATCHED_PORT.PORT_7 Cycles per thread when uops are executed in port 7.
    UOPS_EXECUTED.CORE Counts total number of uops to be executed per-core each cycle.
    UOPS_EXECUTED.CORE_CYCLES_GE_1 Cycles at least 1 micro-op is executed from any thread on physical core.
    UOPS_EXECUTED.CORE_CYCLES_GE_2 Cycles at least 2 micro-ops are executed from any thread on physical core.
    UOPS_EXECUTED.CORE_CYCLES_GE_3 Cycles at least 3 micro-ops are executed from any thread on physical core.
    UOPS_EXECUTED.CORE_CYCLES_GE_4 Cycles at least 4 micro-ops are executed from any thread on physical core.
    UOPS_EXECUTED.CORE_CYCLES_NONE Cycles with no micro-ops executed from any thread on physical core.
    UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC This event counts the cycles where at least one uop was executed. It is counted per thread.
    UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC This event counts the cycles where at least two uops were executed. It is counted per thread.
    UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC This event counts the cycles where at least three uops were executed. It is counted per thread.
    UOPS_EXECUTED.CYCLES_GE_4_UOPS_EXEC Cycles where at least 4 uops were executed per-thread.
    UOPS_EXECUTED.STALL_CYCLES Counts the number of cycles in which no uops were dispatched to be executed on this thread.
    UOPS_EXECUTED_PORT.PORT_0 Cycles in which a uop is dispatched on port 0 in this thread.
    UOPS_EXECUTED_PORT.PORT_0_CORE Cycles per core when uops are executed in port 0.
    UOPS_EXECUTED_PORT.PORT_1 Cycles in which a uop is dispatched on port 1 in this thread.
    UOPS_EXECUTED_PORT.PORT_1_CORE Cycles per core when uops are executed in port 1.
    UOPS_EXECUTED_PORT.PORT_2 Cycles in which a uop is dispatched on port 2 in this thread.
    UOPS_EXECUTED_PORT.PORT_2_CORE Cycles per core when uops are dispatched to port 2.
    UOPS_EXECUTED_PORT.PORT_3 Cycles in which a uop is dispatched on port 3 in this thread.
    UOPS_EXECUTED_PORT.PORT_3_CORE Cycles per core when uops are dispatched to port 3.
    UOPS_EXECUTED_PORT.PORT_4 Cycles in which a uop is dispatched on port 4 in this thread.
    UOPS_EXECUTED_PORT.PORT_4_CORE Cycles per core when uops are executed in port 4.
    UOPS_EXECUTED_PORT.PORT_5 Cycles in which a uop is dispatched on port 5 in this thread.
    UOPS_EXECUTED_PORT.PORT_5_CORE Cycles per core when uops are executed in port 5.
    UOPS_EXECUTED_PORT.PORT_6 Cycles in which a uop is dispatched on port 6 in this thread.
    UOPS_EXECUTED_PORT.PORT_6_CORE Cycles per core when uops are executed in port 6.
    UOPS_EXECUTED_PORT.PORT_7 Cycles in which a uop is dispatched on port 7 in this thread.
    UOPS_EXECUTED_PORT.PORT_7_CORE Cycles per core when uops are dispatched to port 7.
    UOPS_ISSUED.ANY This event counts the number of uops issued by the Front-end of the pipeline to the Back-end. This event is counted at the allocation stage and will count both retired and non-retired uops.
    UOPS_ISSUED.CORE_STALL_CYCLES Cycles when the Resource Allocation Table (RAT) does not issue uops to the Reservation Station (RS) for all threads.
    UOPS_ISSUED.FLAGS_MERGE Number of flags-merge uops allocated. Such uops add delay.
    UOPS_ISSUED.SINGLE_MUL Number of multiply packed/scalar single precision uops allocated.
    UOPS_ISSUED.SLOW_LEA Number of slow LEA or similar uops allocated. Such a uop has 3 sources (for example, 2 sources + immediate) regardless of whether it is a result of an LEA instruction or not.
    UOPS_ISSUED.STALL_CYCLES Cycles when the Resource Allocation Table (RAT) does not issue uops to the Reservation Station (RS) for the thread.
    UOPS_RETIRED.ALL Counts the number of micro-ops retired. Use Cmask=1 and invert to count active cycles or stalled cycles.
    UOPS_RETIRED.ALL_PS Actually retired uops.
    UOPS_RETIRED.CORE_STALL_CYCLES Cycles without actually retired uops.
    UOPS_RETIRED.RETIRE_SLOTS This event counts the number of retirement slots used each cycle. There are potentially 4 slots that can be used each cycle - meaning, 4 uops or 4 instructions could retire each cycle.
    UOPS_RETIRED.RETIRE_SLOTS_PS Retirement slots used.
    UOPS_RETIRED.STALL_CYCLES Cycles without actually retired uops.
    UOPS_RETIRED.TOTAL_CYCLES Cycles with less than 10 actually retired uops.
    UNC_C_BOUNCE_CONTROL Bounce Control.
    UNC_C_CLOCKTICKS Uncore Clocks.
    UNC_C_COUNTER0_OCCUPANCY Since occupancy counts can only be captured in the Cbo's Counter 0, this event allows a user to capture occupancy-related information by filtering the Cbo occupancy count captured in Counter 0. The filtering available is found in the control register: threshold, invert, and edge detect. For example, setting the threshold to 1 can effectively monitor how many cycles the monitored queue has an entry.
    UNC_C_FAST_ASSERTED Counts the number of cycles either the local distress or incoming distress signals are asserted. Incoming distress includes both up and dn.
    UNC_C_LLC_LOOKUP.ANY Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.; Filters for any transaction originating from the IPQ or IRQ. This does not include lookups originating from the ISMQ.
    UNC_C_LLC_LOOKUP.DATA_READ Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.; Read transactions
    UNC_C_LLC_LOOKUP.NID Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.; Qualify one of the other subevents by the Target NID. The NID is programmed in Cn_MSR_PMON_BOX_FILTER.nid. In conjunction with STATE = I, it is possible to monitor misses to specific NIDs in the system.
    UNC_C_LLC_LOOKUP.READ Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.; Read transactions
    UNC_C_LLC_LOOKUP.REMOTE_SNOOP Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.; Filters for only snoop requests coming from the remote socket(s) through the IPQ.
    UNC_C_LLC_LOOKUP.WRITE Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CBoGlCtrl[22:18] bits correspond to [FMESI] state.; Writeback transactions from L2 to the LLC. This includes all write transactions -- both cacheable and UC.
    UNC_C_LLC_VICTIMS.E_STATE Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.
    UNC_C_LLC_VICTIMS.F_STATE Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.
    UNC_C_LLC_VICTIMS.I_STATE Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.
    UNC_C_LLC_VICTIMS.M_STATE Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.
    UNC_C_LLC_VICTIMS.MISS Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.
    UNC_C_LLC_VICTIMS.NID Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.; Qualify one of the other subevents by the Target NID. The NID is programmed in Cn_MSR_PMON_BOX_FILTER.nid. In conjunction with STATE = I, it is possible to monitor misses to specific NIDs in the system.
    UNC_C_LLC_VICTIMS.S_STATE Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.
    UNC_C_MISC.CVZERO_PREFETCH_MISS Miscellaneous events in the Cbo.
    UNC_C_MISC.CVZERO_PREFETCH_VICTIM Miscellaneous events in the Cbo.
    UNC_C_MISC.RFO_HIT_S Miscellaneous events in the Cbo.; Number of times that an RFO hit in S state. This is useful for determining if it might be good for a workload to use RspIWB instead of RspSWB.
    UNC_C_MISC.RSPI_WAS_FSE Miscellaneous events in the Cbo.; Counts the number of times when a Snoop hit in FSE states and triggered a silent eviction. This is useful because this information is lost in the PRE encodings.
    UNC_C_MISC.STARTED Miscellaneous events in the Cbo.
    UNC_C_MISC.WC_ALIASING Miscellaneous events in the Cbo.; Counts the number of times that a USWC write (WCIL(F)) transaction hit in the LLC in M state, triggering a WBMtoI followed by the USWC write. This occurs when there is WC aliasing.
    UNC_C_QLRU.AGE0 How often age was set to 0.
    UNC_C_QLRU.AGE1 How often age was set to 1.
    UNC_C_QLRU.AGE2 How often age was set to 2.
    UNC_C_QLRU.AGE3 How often age was set to 3.
    UNC_C_QLRU.LRU_DECREMENT How often all LRU bits were decremented by 1.
    UNC_C_QLRU.VICTIM_NON_ZERO How often we picked a victim that had a non-zero age.
    UNC_C_RING_AD_USED.ALL Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_C_RING_AD_USED.DOWN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_C_RING_AD_USED.DOWN_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Even ring polarity.
    UNC_C_RING_AD_USED.DOWN_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Odd ring polarity.
    UNC_C_RING_AD_USED.UP Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_C_RING_AD_USED.UP_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Even ring polarity.
    UNC_C_RING_AD_USED.UP_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Odd ring polarity.
    UNC_C_RING_AK_USED.ALL Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_C_RING_AK_USED.DOWN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_C_RING_AK_USED.DOWN_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Even ring polarity.
    UNC_C_RING_AK_USED.DOWN_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Odd ring polarity.
    UNC_C_RING_AK_USED.UP Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_C_RING_AK_USED.UP_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Even ring polarity.
    UNC_C_RING_AK_USED.UP_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Odd ring polarity.
    UNC_C_RING_BL_USED.ALL Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_C_RING_BL_USED.DOWN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_C_RING_BL_USED.DOWN_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Even ring polarity.
    UNC_C_RING_BL_USED.DOWN_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Odd ring polarity.
    UNC_C_RING_BL_USED.UP Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_C_RING_BL_USED.UP_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Even ring polarity.
    UNC_C_RING_BL_USED.UP_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Odd ring polarity.
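    The per-direction ring-usage counts above are usually turned into a utilization figure by normalizing against the Cbo clock. Below is a minimal post-processing sketch, assuming the raw counts were collected over the same interval (for example with Linux perf) and that UNC_C_CLOCKTICKS supplies the uclk cycle count; the function name is illustrative, not part of any tool.

        # Hypothetical helper: BL ring utilization at one ring stop.
        def ring_utilization(up_cycles, down_cycles, uclk_cycles):
            """(UNC_C_RING_BL_USED.UP + .DOWN) / (2 * UNC_C_CLOCKTICKS).

            Each direction can be busy independently in a cycle, so the sum
            is divided by 2 to express utilization on a 0..1 scale."""
            return (up_cycles + down_cycles) / (2.0 * uclk_cycles) if uclk_cycles else 0.0

        print(ring_utilization(1_200_000, 900_000, 3_000_000))  # prints 0.35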
    UNC_C_RING_BOUNCES.AD Number of LLC responses that bounced on the Ring.; AD
    UNC_C_RING_BOUNCES.AK Number of LLC responses that bounced on the Ring.; AK
    UNC_C_RING_BOUNCES.BL Number of LLC responses that bounced on the Ring.; BL
    UNC_C_RING_BOUNCES.IV Number of LLC responses that bounced on the Ring.; Snoops of processor's cache.
    UNC_C_RING_IV_USED.ANY Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only one IV ring in HSX. Therefore, to monitor the "Even" ring, select both UP_EVEN and DN_EVEN; to monitor the "Odd" ring, select both UP_ODD and DN_ODD.; Filters any polarity.
    UNC_C_RING_IV_USED.DN Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only one IV ring in HSX. Therefore, to monitor the "Even" ring, select both UP_EVEN and DN_EVEN; to monitor the "Odd" ring, select both UP_ODD and DN_ODD.; Filters for Down polarity.
    UNC_C_RING_IV_USED.DOWN Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only one IV ring in HSX. Therefore, to monitor the "Even" ring, select both UP_EVEN and DN_EVEN; to monitor the "Odd" ring, select both UP_ODD and DN_ODD.; Filters for Down polarity.
    UNC_C_RING_IV_USED.UP Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only one IV ring in HSX. Therefore, to monitor the "Even" ring, select both UP_EVEN and DN_EVEN; to monitor the "Odd" ring, select both UP_ODD and DN_ODD.; Filters for Up polarity.
    UNC_C_RING_SINK_STARVED.AD AD
    UNC_C_RING_SINK_STARVED.AK AK
    UNC_C_RING_SINK_STARVED.BL BL
    UNC_C_RING_SINK_STARVED.IV IV
    UNC_C_RING_SRC_THRTL Number of cycles the Cbo is actively throttling traffic onto the Ring in order to limit bounce traffic.
    UNC_C_RxR_EXT_STARVED.IPQ Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.; IPQ is externally starved and therefore we are blocking the IRQ.
    UNC_C_RxR_EXT_STARVED.IRQ Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.; IRQ is externally starved and therefore we are blocking the IPQ.
    UNC_C_RxR_EXT_STARVED.ISMQ_BIDS Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.; Number of times that the ISMQ issued a bid.
    UNC_C_RxR_EXT_STARVED.PRQ Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.
    UNC_C_RxR_INSERTS.IPQ Counts the number of allocations per cycle into the specified Ingress queue.
    UNC_C_RxR_INSERTS.IRQ Counts the number of allocations per cycle into the specified Ingress queue.
    UNC_C_RxR_INSERTS.IRQ_REJ Counts the number of allocations per cycle into the specified Ingress queue.
    UNC_C_RxR_INSERTS.PRQ Counts the number of allocations per cycle into the specified Ingress queue.
    UNC_C_RxR_INSERTS.PRQ_REJ Counts the number of allocations per cycle into the specified Ingress queue.
    UNC_C_RxR_INT_STARVED.IPQ Counts cycles in internal starvation. This occurs when one (or more) of the entries in the ingress queue are being starved out by other entries in that queue.; Cycles with the IPQ in Internal Starvation.
    UNC_C_RxR_INT_STARVED.IRQ Counts cycles in internal starvation. This occurs when one (or more) of the entries in the ingress queue are being starved out by other entries in that queue.; Cycles with the IRQ in Internal Starvation.
    UNC_C_RxR_INT_STARVED.ISMQ Counts cycles in internal starvation. This occurs when one (or more) of the entries in the ingress queue are being starved out by other entries in that queue.; Cycles with the ISMQ in Internal Starvation.
    UNC_C_RxR_INT_STARVED.PRQ Counts cycles in internal starvation. This occurs when one (or more) of the entries in the ingress queue are being starved out by other entries in that queue.
    UNC_C_RxR_IPQ_RETRY.ADDR_CONFLICT Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.; Counts the number of times that a request from the IPQ was retried because of a TOR reject from an address conflict. Address conflicts out of the IPQ should be rare. They will generally only occur if two different sockets are sending requests to the same address at the same time. This is a true "conflict" case, unlike the IRQ Address Conflict, which is commonly caused by prefetching characteristics.
    UNC_C_RxR_IPQ_RETRY.ANY Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.; Counts the number of times that a request from the IPQ was retried because of a TOR reject. TOR rejects from the IPQ can be caused by the Egress being full or by Address Conflicts.
    UNC_C_RxR_IPQ_RETRY.FULL Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.; Counts the number of times that a request from the IPQ was retried because of a TOR reject from the Egress being full. IPQ requests make use of the AD Egress for regular responses, the BL Egress to forward data, and the AK Egress to return credits.
    UNC_C_RxR_IPQ_RETRY.QPI_CREDITS Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.
    UNC_C_RxR_IPQ_RETRY2.AD_SBO Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.; Counts the number of times that a request from the IPQ was retried because it lacked credits to send an AD packet to the Sbo.
    UNC_C_RxR_IPQ_RETRY2.TARGET Number of times a snoop (probe) request had to retry. Filters exist to cover some of the common retry cases.; Counts the number of times that a request from the IPQ was retried, filtered by the Target NodeID as specified in the Cbox's Filter register.
    UNC_C_RxR_IRQ_RETRY.ADDR_CONFLICT Counts the number of times that a request from the IRQ was retried because of an address match in the TOR. In order to maintain coherency, requests to the same address are not allowed to pass each other up in the Cbo. Therefore, if there is an outstanding request to a given address, one cannot issue another request to that address until it is complete. This comes up most commonly with prefetches. Outstanding prefetches occasionally will not complete their memory fetch and a demand request to the same address will then sit in the IRQ and get retried until the prefetch fills the data into the LLC. Therefore, it is not uncommon to see this case in high bandwidth streaming workloads when the LLC Prefetcher in the core is enabled.
    UNC_C_RxR_IRQ_RETRY.ANY Counts the number of IRQ retries that occur. Requests from the IRQ are retried if they are rejected from the TOR pipeline for a variety of reasons. Some of the most common reasons include if the Egress is full, there are no RTIDs, or there is a Physical Address match to another outstanding request.
    UNC_C_RxR_IRQ_RETRY.FULL Counts the number of times that a request from the IRQ was retried because it failed to acquire an entry in the Egress. The egress is the buffer that queues up for allocating onto the ring. IRQ requests can make use of all four rings and all four Egresses. If any of the queues that a given request needs to make use of are full, the request will be retried.
    UNC_C_RxR_IRQ_RETRY.IIO_CREDITS Number of times a request attempted to acquire the NCS/NCB credit for sending messages on BL to the IIO. There is a single credit in each CBo that is shared between the NCS and NCB message classes for sending transactions on the BL ring (such as read data) to the IIO.
    UNC_C_RxR_IRQ_RETRY.NID Qualify one of the other subevents by a given RTID destination NID. The NID is programmed in Cn_MSR_PMON_BOX_FILTER1.nid.
    UNC_C_RxR_IRQ_RETRY.QPI_CREDITS Number of request rejects due to a lack of QPI Ingress credits. These credits are required in order to send transactions to the QPI agent. Please see the QPI_IGR_CREDITS events for more information.
    UNC_C_RxR_IRQ_RETRY.RTID Counts the number of times that requests from the IRQ were retried because there were no RTIDs available. RTIDs are required after a request misses the LLC and needs to send snoops and/or requests to memory. If there are no RTIDs available, requests will queue up in the IRQ and retry until one becomes available. Note that there are multiple RTID pools for the different sockets. There may be cases where the local RTIDs are all used, but requests destined for remote memory can still acquire an RTID because there are remote RTIDs available. This event does not provide any filtering for this case.
    UNC_C_RxR_IRQ_RETRY2.AD_SBO Counts the number of times that a request from the IRQ was retried because it lacked credits to send an AD packet to the Sbo.
    UNC_C_RxR_IRQ_RETRY2.BL_SBO Counts the number of times that a request from the IRQ was retried because it lacked credits to send a BL packet to the Sbo.
    UNC_C_RxR_IRQ_RETRY2.TARGET Counts the number of times that a request from the IRQ was retried, filtered by the Target NodeID as specified in the Cbox's Filter register.
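    Because a single request can be rejected from the TOR pipeline many times, the retry counts above are most readable as a ratio against IRQ allocations. A minimal sketch of that ratio, assuming both counts come from the same Cbo over the same interval; the function name is illustrative.

        # Hypothetical helper: average number of retries per IRQ allocation.
        def irq_retry_rate(retry_any, irq_inserts):
            """UNC_C_RxR_IRQ_RETRY.ANY / UNC_C_RxR_INSERTS.IRQ; values well
            above 1.0 suggest requests loop through the IRQ repeatedly
            (e.g. waiting on RTIDs or a full Egress)."""
            return retry_any / irq_inserts if irq_inserts else 0.0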
    UNC_C_RxR_ISMQ_RETRY.ANY Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.; Counts the total number of times that a request from the ISMQ was retried because of a TOR reject. ISMQ requests generally will not need to retry (or at least ISMQ retries are less common than IRQ retries). ISMQ requests will retry if they are not able to acquire a needed Egress credit to get onto the ring, or for cache evictions that need to acquire an RTID. Most ISMQ requests already have an RTID, so eviction retries will be less common here.
    UNC_C_RxR_ISMQ_RETRY.FULL Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.; Counts the number of times that a request from the ISMQ was retried because of a TOR reject caused by a lack of Egress credits. The Egress is the buffer that queues up for allocating onto the ring. If any of the Egress queues that a given request needs to make use of are full, the request will be retried.
    UNC_C_RxR_ISMQ_RETRY.IIO_CREDITS Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.; Number of times a request attempted to acquire the NCS/NCB credit for sending messages on BL to the IIO. There is a single credit in each CBo that is shared between the NCS and NCB message classes for sending transactions on the BL ring (such as read data) to the IIO.
    UNC_C_RxR_ISMQ_RETRY.NID Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.; Qualify one of the other subevents by a given RTID destination NID. The NID is programmed in Cn_MSR_PMON_BOX_FILTER1.nid.
    UNC_C_RxR_ISMQ_RETRY.QPI_CREDITS Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.
    UNC_C_RxR_ISMQ_RETRY.RTID Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.; Counts the number of times that a request from the ISMQ was retried because of a TOR reject caused by no RTIDs. M-state cache evictions are serviced through the ISMQ, and must acquire an RTID in order to write back to memory. If no RTIDs are available, they will be retried.
    UNC_C_RxR_ISMQ_RETRY.WB_CREDITS Number of times a transaction flowing through the ISMQ had to retry. Transactions pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.; Counts the number of times that a request from the ISMQ was retried because of a lack of writeback credits.
    UNC_C_RxR_ISMQ_RETRY2.AD_SBO Counts the number of times that a request from the ISMQ was retried because it lacked credits to send an AD packet to the Sbo.
    UNC_C_RxR_ISMQ_RETRY2.BL_SBO Counts the number of times that a request from the ISMQ was retried because it lacked credits to send a BL packet to the Sbo.
    UNC_C_RxR_ISMQ_RETRY2.TARGET Counts the number of times that a request from the ISMQ was retried, filtered by the Target NodeID as specified in the Cbox's Filter register.
    UNC_C_RxR_OCCUPANCY.IPQ Counts the number of entries in the specified Ingress queue in each cycle.
    UNC_C_RxR_OCCUPANCY.IRQ Counts the number of entries in the specified Ingress queue in each cycle.
    UNC_C_RxR_OCCUPANCY.IRQ_REJ Counts the number of entries in the specified Ingress queue in each cycle.
    UNC_C_RxR_OCCUPANCY.PRQ_REJ Counts the number of entries in the specified Ingress queue in each cycle.
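    Taken together, the insert and occupancy events support a Little's Law calculation: the occupancy event accumulates the entry count every cycle, so dividing by the number of allocations yields average residency in uclk cycles. A minimal sketch, with an illustrative function name.

        # Hypothetical helper: average queue residency via Little's Law.
        def avg_residency_cycles(occupancy_accum, inserts):
            """e.g. UNC_C_RxR_OCCUPANCY.IRQ / UNC_C_RxR_INSERTS.IRQ gives the
            average number of uclk cycles a request sits in the IRQ."""
            return occupancy_accum / inserts if inserts else 0.0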
    UNC_C_SBO_CREDIT_OCCUPANCY.AD Number of Sbo credits in use in a given cycle, per ring. Each Cbo is assigned an Sbo it can communicate with.
    UNC_C_SBO_CREDIT_OCCUPANCY.BL Number of Sbo credits in use in a given cycle, per ring. Each Cbo is assigned an Sbo it can communicate with.
    UNC_C_SBO_CREDITS_ACQUIRED.AD Number of Sbo credits acquired in a given cycle, per ring. Each Cbo is assigned an Sbo it can communicate with.
    UNC_C_SBO_CREDITS_ACQUIRED.BL Number of Sbo credits acquired in a given cycle, per ring. Each Cbo is assigned an Sbo it can communicate with.
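    The same occupancy-over-acquired division applies to the Sbo credit events above: it estimates how long, on average, each credit is held. A minimal sketch under that assumption; the function name is illustrative.

        # Hypothetical helper: average uclk cycles each Sbo credit is held.
        def avg_credit_hold_cycles(credit_occupancy, credits_acquired):
            """e.g. UNC_C_SBO_CREDIT_OCCUPANCY.AD / UNC_C_SBO_CREDITS_ACQUIRED.AD."""
            return credit_occupancy / credits_acquired if credits_acquired else 0.0

        print(avg_credit_hold_cycles(500_000, 10_000))  # 50.0 cycles per AD credit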
    UNC_C_TOR_INSERTS.ALL Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions inserted into the TOR. This includes requests that reside in the TOR for a short time, such as LLC Hits that do not need to snoop cores or requests that get rejected and have to be retried through one of the ingress queues. The TOR is more commonly a bottleneck in SKUs with smaller core counts, where the ratio of RTIDs to TOR entries is larger. Note that there are reserved TOR entries for various request types, so it is possible for a given request type to be blocked with an occupancy that is less than 20. Also note that generally requests will not be able to arbitrate into the TOR pipeline if there are no available TOR slots.
    UNC_C_TOR_INSERTS.EVICTION Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Eviction transactions inserted into the TOR. Evictions can be quick, such as when the line is in the F, S, or E states and no core valid bits are set. They can also be longer if either CV bits are set (so the cores need to be snooped) and/or if there is a HitM (in which case it is necessary to write the request out to memory).
    UNC_C_TOR_INSERTS.LOCAL Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions inserted into the TOR that are satisfied by locally HOMed memory.
    UNC_C_TOR_INSERTS.LOCAL_OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions inserted into the TOR that match an opcode and are satisfied by locally HOMed memory.
    UNC_C_TOR_INSERTS.MISS_LOCAL Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions inserted into the TOR that are satisfied by locally HOMed memory.
    UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions inserted into the TOR that match an opcode and are satisfied by locally HOMed memory.
    UNC_C_TOR_INSERTS.MISS_OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions inserted into the TOR that match an opcode.
    UNC_C_TOR_INSERTS.MISS_REMOTE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions inserted into the TOR that are satisfied by remote caches or remote memory.
    UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions inserted into the TOR that match an opcode and are satisfied by remote caches or remote memory.
    UNC_C_TOR_INSERTS.NID_ALL Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All NID matched (matches an RTID destination) transactions inserted into the TOR. The NID is programmed in Cn_MSR_PMON_BOX_FILTER.nid. In conjunction with STATE = I, it is possible to monitor misses to specific NIDs in the system.
    UNC_C_TOR_INSERTS.NID_EVICTION Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; NID matched eviction transactions inserted into the TOR.
    UNC_C_TOR_INSERTS.NID_MISS_ALL Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All NID matched miss requests that were inserted into the TOR.
    UNC_C_TOR_INSERTS.NID_MISS_OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Miss transactions inserted into the TOR that match a NID and an opcode.
    UNC_C_TOR_INSERTS.NID_OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Transactions inserted into the TOR that match a NID and an opcode.
    UNC_C_TOR_INSERTS.NID_WB Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; NID matched write transactions inserted into the TOR.
    UNC_C_TOR_INSERTS.OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Transactions inserted into the TOR that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc).
    UNC_C_TOR_INSERTS.REMOTE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions inserted into the TOR that are satisfied by remote caches or remote memory.
    UNC_C_TOR_INSERTS.REMOTE_OPCODE Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All transactions inserted into the TOR that match an opcode and are satisfied by remote caches or remote memory.
    UNC_C_TOR_INSERTS.WB Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Write transactions inserted into the TOR. This does not include "RFO", but actual operations that contain data being sent from the core.
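    The DRD example in the descriptions above maps directly onto a countable event. Below is a minimal sketch using Linux perf, assuming a kernel whose Haswell-EP uncore driver exposes the Cbo opcode filter as filter_opc, and taking event=0x35, umask=0x3 as the encoding of UNC_C_TOR_INSERTS.MISS_OPCODE from the platform event list; both encodings are assumptions to verify against your own event files.

        # Hedged sketch: count DRD (opcode 0x182) misses entering Cbo 0's TOR.
        import subprocess

        event = "uncore_cbox_0/event=0x35,umask=0x3,filter_opc=0x182/"
        # System-wide count for one second; typically requires root.
        subprocess.run(["perf", "stat", "-a", "-e", event, "sleep", "1"], check=True)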
    UNC_C_TOR_OCCUPANCY.ALL For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; All valid TOR entries. This includes requests that reside in the TOR for a short time, such as LLC Hits that do not need to snoop cores or requests that get rejected and have to be retried through one of the ingress queues. The TOR is more commonly a bottleneck in SKUs with smaller core counts, where the ratio of RTIDs to TOR entries is larger. Note that there are reserved TOR entries for various request types, so it is possible for a given request type to be blocked with an occupancy that is less than 20. Also note that generally requests will not be able to arbitrate into the TOR pipeline if there are no available TOR slots.
    UNC_C_TOR_OCCUPANCY.EVICTION For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Number of outstanding eviction transactions in the TOR. Evictions can be quick, such as when the line is in the F, S, or E states and no core valid bits are set. They can also be longer if either CV bits are set (so the cores need to be snooped) and/or if there is a HitM (in which case it is necessary to write the request out to memory).
    UNC_C_TOR_OCCUPANCY.LOCAL For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).
    UNC_C_TOR_OCCUPANCY.LOCAL_OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Number of outstanding transactions in the TOR that match an opcode and are satisfied by locally HOMed memory.
    UNC_C_TOR_OCCUPANCY.MISS_ALL For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Number of outstanding miss requests in the TOR. 'Miss' means the allocation requires an RTID. This generally means that the request was sent to memory or MMIO.
    UNC_C_TOR_OCCUPANCY.MISS_LOCAL For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).
    UNC_C_TOR_OCCUPANCY.MISS_LOCAL_OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Number of outstanding miss transactions in the TOR that match an opcode and are satisfied by locally HOMed memory.
    UNC_C_TOR_OCCUPANCY.MISS_OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; TOR entries for miss transactions that match an opcode. This generally means that the request was sent to memory or MMIO.
    UNC_C_TOR_OCCUPANCY.MISS_REMOTE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).
    UNC_C_TOR_OCCUPANCY.MISS_REMOTE_OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Number of outstanding miss transactions in the TOR that match an opcode and are satisfied by remote caches or remote memory.
    UNC_C_TOR_OCCUPANCY.NID_ALL For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Number of NID matched outstanding requests in the TOR. The NID is programmed in Cn_MSR_PMON_BOX_FILTER.nid. In conjunction with STATE = I, it is possible to monitor misses to specific NIDs in the system.
    UNC_C_TOR_OCCUPANCY.NID_EVICTION For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Number of outstanding NID matched eviction transactions in the TOR.
    UNC_C_TOR_OCCUPANCY.NID_MISS_ALL For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Number of outstanding miss requests in the TOR that match a NID.
    UNC_C_TOR_OCCUPANCY.NID_MISS_OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Number of outstanding miss requests in the TOR that match a NID and an opcode.
    UNC_C_TOR_OCCUPANCY.NID_OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; TOR entries that match a NID and an opcode.
    UNC_C_TOR_OCCUPANCY.NID_WB For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; NID matched write transactions in the TOR.
    UNC_C_TOR_OCCUPANCY.OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; TOR entries that match an opcode (matched by Cn_MSR_PMON_BOX_FILTER.opc).
    UNC_C_TOR_OCCUPANCY.REMOTE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).
    UNC_C_TOR_OCCUPANCY.REMOTE_OPCODE For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Number of outstanding transactions in the TOR that match an opcode and are satisfied by remote caches or remote memory.
    UNC_C_TOR_OCCUPANCY.WB For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select "MISS_OPC_MATCH" and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182).; Write transactions in the TOR. This does not include "RFO", but actual operations that contain data being sent from the core.
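    Dividing a matching occupancy count by the matching insert count gives the average TOR lifetime of those transactions; for DRD misses this is a common approximation of LLC miss latency in uclk cycles. A minimal sketch, assuming both events were programmed with the same filter_opc = 0x182 (DRD); the names are illustrative.

        # Hypothetical helper: average DRD miss latency seen at the Cbo.
        def avg_miss_latency(occupancy_miss_opcode, inserts_miss_opcode, uclk_hz=None):
            """UNC_C_TOR_OCCUPANCY.MISS_OPCODE / UNC_C_TOR_INSERTS.MISS_OPCODE.
            Result is in uclk cycles; if uclk_hz is given, it is converted
            to nanoseconds."""
            if not inserts_miss_opcode:
                return 0.0
            cycles = occupancy_miss_opcode / inserts_miss_opcode
            return cycles * 1e9 / uclk_hz if uclk_hz else cycles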
    UNC_C_TxR_ADS_USED.AD Onto AD Ring
    UNC_C_TxR_ADS_USED.AK Onto AK Ring
    UNC_C_TxR_ADS_USED.BL Onto BL Ring
    UNC_C_TxR_INSERTS.AD_CACHE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.; Ring transactions from the Cachebo destined for the AD ring. Some examples include outbound requests, snoop requests, and snoop responses.
    UNC_C_TxR_INSERTS.AD_CORE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.; Ring transactions from the Corebo destined for the AD ring. This is commonly used for outbound requests.
    UNC_C_TxR_INSERTS.AK_CACHE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.; Ring transactions from the Cachebo destined for the AK ring. This is commonly used for credit returns and GO responses.
    UNC_C_TxR_INSERTS.AK_CORE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.; Ring transactions from the Corebo destined for the AK ring. This is commonly used for snoop responses coming from the core and destined for a Cachebo.
    UNC_C_TxR_INSERTS.BL_CACHE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.; Ring transactions from the Cachebo destined for the BL ring. This is commonly used to send data from the cache to various destinations.
    UNC_C_TxR_INSERTS.BL_CORE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.; Ring transactions from the Corebo destined for the BL ring. This is commonly used for transferring writeback data to the cache.
    UNC_C_TxR_INSERTS.IV_CACHE Number of allocations into the Cbo Egress. The Egress is used to queue up requests destined for the ring.; Ring transactions from the Cachebo destined for the IV ring. This is commonly used for snoops to the cores.
    UNC_C_TxR_STARVED.AD_CORE Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.; Cycles that the core AD egress spent in starvation.
    UNC_C_TxR_STARVED.AK_BOTH Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.; Cycles that both AK egresses spent in starvation.
    UNC_C_TxR_STARVED.BL_BOTH Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.; Cycles that both BL egresses spent in starvation.
    UNC_C_TxR_STARVED.IV Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.; Cycles that the cachebo IV egress spent in starvation.
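    Starvation counts are cycle counts, so they read most naturally as a fraction of total uclk cycles. A minimal sketch, assuming UNC_C_CLOCKTICKS was collected over the same interval; the function name is illustrative.

        # Hypothetical helper: fraction of cycles an Egress spent starved.
        def starved_fraction(starved_cycles, uclk_cycles):
            """e.g. UNC_C_TxR_STARVED.AD_CORE / UNC_C_CLOCKTICKS."""
            return starved_cycles / uclk_cycles if uclk_cycles else 0.0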
    UNC_H_ADDR_OPC_MATCH.AD QPI Address/Opcode Match; AD Opcodes
    UNC_H_ADDR_OPC_MATCH.ADDR QPI Address/Opcode Match; Address
    UNC_H_ADDR_OPC_MATCH.AK QPI Address/Opcode Match; AK Opcodes
    UNC_H_ADDR_OPC_MATCH.BL QPI Address/Opcode Match; BL Opcodes
    UNC_H_ADDR_OPC_MATCH.FILT QPI Address/Opcode Match; Address & Opcode Match
    UNC_H_ADDR_OPC_MATCH.OPC QPI Address/Opcode Match; Opcode
    UNC_H_BT_CYCLES_NE Cycles the Backup Tracker (BT) is not empty. The BT is the actual HOM tracker in IVT.
    UNC_H_BT_TO_HT_NOT_ISSUED.INCOMING_BL_HAZARD Counts the number of cycles when the HA does not issue transaction from BT to HT.; Cycles unable to issue from BT due to incoming BL data hazard
    UNC_H_BT_TO_HT_NOT_ISSUED.INCOMING_SNP_HAZARD Counts the number of cycles when the HA does not issue transaction from BT to HT.; Cycles unable to issue from BT due to incoming snoop hazard
    UNC_H_BT_TO_HT_NOT_ISSUED.RSPACKCFLT_HAZARD Counts the number of cycles when the HA does not issue transaction from BT to HT.; Cycles unable to issue from BT due to an incoming RspAckCnflt hazard
    UNC_H_BT_TO_HT_NOT_ISSUED.WBMDATA_HAZARD Counts the number of cycles when the HA does not issue transaction from BT to HT.; Cycles unable to issue from BT due to an incoming WbMData hazard
    UNC_H_BYPASS_IMC.NOT_TAKEN Counts the number of times that a bypass was attempted by the HA. This is a latency optimization for situations when there is a light load on the memory subsystem. This can be filtered by whether the bypass was taken or not.; Filter for transactions that could not take the bypass.
    UNC_H_BYPASS_IMC.TAKEN Counts the number of times that a bypass was attempted by the HA. This is a latency optimization for situations when there is a light load on the memory subsystem. This can be filtered by whether the bypass was taken or not.; Filter for transactions that succeeded in taking the bypass.
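    Since the TAKEN and NOT_TAKEN filters partition the bypass attempts, their ratio gives the bypass take rate. A minimal sketch, with an illustrative function name.

        # Hypothetical helper: share of attempted HA bypasses actually taken.
        def bypass_take_rate(taken, not_taken):
            """UNC_H_BYPASS_IMC.TAKEN / (TAKEN + NOT_TAKEN)."""
            attempts = taken + not_taken
            return taken / attempts if attempts else 0.0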
    UNC_H_CLOCKTICKS Counts the number of uclks in the HA. This will be slightly different than the count in the Ubox because of enable/freeze delays. The HA is on the other side of the die from the fixed Ubox uclk counter, so the drift could be somewhat larger than in units that are closer, like the QPI Agent.
    UNC_H_DIRECT2CORE_COUNT Number of Direct2Core messages sent
    UNC_H_DIRECT2CORE_CYCLES_DISABLED Number of cycles in which Direct2Core was disabled
    UNC_H_DIRECT2CORE_TXN_OVERRIDE Number of Reads where Direct2Core was overridden
    UNC_H_DIRECTORY_LAT_OPT Directory Latency Optimization Data Return Path Taken. When directory mode is enabled and the directory returned for a read is Dir=I, then data can be returned using a faster path if certain conditions are met (credits, free pipeline, etc).
    UNC_H_DIRECTORY_LOOKUP.NO_SNP Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to.; Filters for transactions that did not have to send any snoops because the directory bit was clear.
    UNC_H_DIRECTORY_LOOKUP.SNP Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to.; Filters for transactions that had to send one or more snoops because the directory bit was set.
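    The SNP and NO_SNP filters partition directory lookups, so their ratio shows how often the directory forced snoops. A minimal sketch, with an illustrative function name.

        # Hypothetical helper: fraction of directory lookups that sent snoops.
        def directory_snoop_ratio(snp, no_snp):
            """UNC_H_DIRECTORY_LOOKUP.SNP / (SNP + NO_SNP)."""
            total = snp + no_snp
            return snp / total if total else 0.0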
    UNC_H_DIRECTORY_UPDATE.ANY Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears.
    UNC_H_DIRECTORY_UPDATE.CLEAR Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears.; Filter for directory clears. This occurs when snoops were sent and all returned with RspI.
    UNC_H_DIRECTORY_UPDATE.SET Counts the number of directory updates that were required. These result in writes to the memory controller. This can be filtered by directory sets and directory clears.; Filter for directory sets. This occurs when a remote read transaction requests memory, bringing it to a remote cache.
    UNC_H_HITME_HIT.ACKCNFLTWBI Counts Number of Hits in HitMe Cache; op is AckCnfltWbI
    UNC_H_HITME_HIT.ALL Counts Number of Hits in HitMe Cache; All Requests
    UNC_H_HITME_HIT.ALLOCS Counts Number of Hits in HitMe Cache; Allocations
    UNC_H_HITME_HIT.EVICTS Counts Number of Hits in HitMe Cache; Evictions
    UNC_H_HITME_HIT.HOM Counts Number of Hits in HitMe Cache; HOM Requests
    UNC_H_HITME_HIT.INVALS Counts Number of Hits in HitMe Cache; Invalidations
    UNC_H_HITME_HIT.READ_OR_INVITOE Counts Number of Hits in HitMe Cache; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE
    UNC_H_HITME_HIT.RSP Counts Number of Hits in HitMe Cache; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI
    UNC_H_HITME_HIT.RSPFWDI_LOCAL Counts Number of Hits in HitMe Cache; op is RspIFwd or RspIFwdWb for a local request
    UNC_H_HITME_HIT.RSPFWDI_REMOTE Counts Number of Hits in HitMe Cache; op is RspIFwd or RspIFwdWb for a remote request
    UNC_H_HITME_HIT.RSPFWDS Counts Number of Hits in HitMe Cache; op is RspSFwd or RspSFwdWb
    UNC_H_HITME_HIT.WBMTOE_OR_S Counts Number of Hits in HitMe Cache; op is WbMtoE or WbMtoS
    UNC_H_HITME_HIT.WBMTOI Counts Number of Hits in HitMe Cache; op is WbMtoI
    UNC_H_HITME_HIT_PV_BITS_SET.ACKCNFLTWBI Accumulates Number of PV bits set on HitMe Cache Hits; op is AckCnfltWbI
    UNC_H_HITME_HIT_PV_BITS_SET.ALL Accumulates Number of PV bits set on HitMe Cache Hits; All Requests
    UNC_H_HITME_HIT_PV_BITS_SET.HOM Accumulates Number of PV bits set on HitMe Cache Hits; HOM Requests
    UNC_H_HITME_HIT_PV_BITS_SET.READ_OR_INVITOE Accumulates Number of PV bits set on HitMe Cache Hits; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE
    UNC_H_HITME_HIT_PV_BITS_SET.RSP Accumulates Number of PV bits set on HitMe Cache Hits; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI
    UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDI_LOCAL Accumulates Number of PV bits set on HitMe Cache Hits; op is RspIFwd or RspIFwdWb for a local request
    UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDI_REMOTE Accumulates Number of PV bits set on HitMe Cache Hits; op is RspIFwd or RspIFwdWb for a remote request
    UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDS Accumulates Number of PV bits set on HitMe Cache Hits; op is RspSFwd or RspSFwdWb
    UNC_H_HITME_HIT_PV_BITS_SET.WBMTOE_OR_S Accumulates Number of PV bits set on HitMe Cache Hits; op is WbMtoE or WbMtoS
    UNC_H_HITME_HIT_PV_BITS_SET.WBMTOI Accumulates Number of PV bits set on HitMe Cache Hits; op is WbMtoI
    UNC_H_HITME_LOOKUP.ACKCNFLTWBI Counts Number of times HitMe Cache is accessed; op is AckCnfltWbI
    UNC_H_HITME_LOOKUP.ALL Counts Number of times HitMe Cache is accessed; All Requests
    UNC_H_HITME_LOOKUP.ALLOCS Counts Number of times HitMe Cache is accessed; Allocations
    UNC_H_HITME_LOOKUP.HOM Counts Number of times HitMe Cache is accessed; HOM Requests
    UNC_H_HITME_LOOKUP.INVALS Counts Number of times HitMe Cache is accessed; Invalidations
    UNC_H_HITME_LOOKUP.READ_OR_INVITOE Counts Number of times HitMe Cache is accessed; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE
    UNC_H_HITME_LOOKUP.RSP Counts Number of times HitMe Cache is accessed; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI
    UNC_H_HITME_LOOKUP.RSPFWDI_LOCAL Counts Number of times HitMe Cache is accessed; op is RspIFwd or RspIFwdWb for a local request
    UNC_H_HITME_LOOKUP.RSPFWDI_REMOTE Counts Number of times HitMe Cache is accessed; op is RspIFwd or RspIFwdWb for a remote request
    UNC_H_HITME_LOOKUP.RSPFWDS Counts Number of times HitMe Cache is accessed; op is RspSFwd or RspSFwdWb
    UNC_H_HITME_LOOKUP.WBMTOE_OR_S Counts Number of times HitMe Cache is accessed; op is WbMtoE or WbMtoS
    UNC_H_HITME_LOOKUP.WBMTOI Counts Number of times HitMe Cache is accessed; op is WbMtoI
    UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI0 Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links.
    UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI1 Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links.
    UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI2 Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links.
    UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI0 Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links.
    UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI1 Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links.
    UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI2 Counts the number of cycles when the HA does not have credits to send messages to the QPI Agent. This can be filtered by the different credit pools and the different links.
    UNC_H_IMC_READS.NORMAL Count of the number of reads issued to any of the memory controller channels. This can be filtered by the priority of the reads.
    UNC_H_IMC_RETRY Retry Events
    UNC_H_IMC_WRITES.ALL Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.
    UNC_H_IMC_WRITES.FULL Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.
    UNC_H_IMC_WRITES.FULL_ISOCH Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.
    UNC_H_IMC_WRITES.PARTIAL Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.
    UNC_H_IMC_WRITES.PARTIAL_ISOCH Counts the total number of full line writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.
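Taken together, UNC_H_IMC_READS and UNC_H_IMC_WRITES describe the read and write traffic the HA drives into the memory controller across all four channels. A minimal bandwidth sketch in Python, assuming the counter values have already been collected over a known interval with an external tool (collection itself is outside this snippet) and that each full-line event corresponds to one 64-byte cache line:

    CACHE_LINE_BYTES = 64

    def ha_memory_bandwidth_gbs(imc_reads_normal, imc_writes_full, seconds):
        """Approximate HA-driven DRAM bandwidth in GB/s from the
        UNC_H_IMC_READS.NORMAL and UNC_H_IMC_WRITES.FULL counts.
        Partial writes move less than a full line, so counting only
        FULL writes keeps the write estimate conservative."""
        read_gbs = imc_reads_normal * CACHE_LINE_BYTES / seconds / 1e9
        write_gbs = imc_writes_full * CACHE_LINE_BYTES / seconds / 1e9
        return read_gbs, write_gbs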
    UNC_H_IOT_BACKPRESSURE.HUB IOT Backpressure
    UNC_H_IOT_BACKPRESSURE.SAT IOT Backpressure
    UNC_H_IOT_CTS_EAST_LO.CTS0 Debug Mask/Match Tie-Ins
    UNC_H_IOT_CTS_EAST_LO.CTS1 Debug Mask/Match Tie-Ins
    UNC_H_IOT_CTS_HI.CTS2 Debug Mask/Match Tie-Ins
    UNC_H_IOT_CTS_HI.CTS3 Debug Mask/Match Tie-Ins
    UNC_H_IOT_CTS_WEST_LO.CTS0 Debug Mask/Match Tie-Ins
    UNC_H_IOT_CTS_WEST_LO.CTS1 Debug Mask/Match Tie-Ins
    UNC_H_OSB.CANCELLED Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.; OSB snoop broadcast cancelled due to D2C or other. OSB cancel is counted when an OSB local read is not allowed even when the transaction is a local InvItoE. It also counts D2C OSB cancels, including the cases where D2C was not set in the first place for the transaction coming from the ring.
    UNC_H_OSB.INVITOE_LOCAL Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.
    UNC_H_OSB.READS_LOCAL Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.
    UNC_H_OSB.READS_LOCAL_USEFUL Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.
    UNC_H_OSB.REMOTE Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.
    UNC_H_OSB.REMOTE_USEFUL Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.
    UNC_H_OSB_EDR.ALL Counts the number of transactions that broadcast a snoop due to OSB but found clean data in memory and were able to do an early data return
    UNC_H_OSB_EDR.READS_LOCAL_I Counts the number of transactions that broadcast a snoop due to OSB but found clean data in memory and were able to do an early data return
    UNC_H_OSB_EDR.READS_LOCAL_S Counts the number of transactions that broadcast a snoop due to OSB but found clean data in memory and were able to do an early data return
    UNC_H_OSB_EDR.READS_REMOTE_I Counts the number of transactions that broadcast a snoop due to OSB but found clean data in memory and were able to do an early data return
    UNC_H_OSB_EDR.READS_REMOTE_S Counts the number of transactions that broadcast a snoop due to OSB but found clean data in memory and were able to do an early data return
    UNC_H_REQUESTS.INVITOE_LOCAL Counts the total number of read and write requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc.).; This filter includes only InvItoEs coming from the local socket.
    UNC_H_REQUESTS.INVITOE_REMOTE Counts the total number of read and write requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc.).; This filter includes only InvItoEs coming from remote sockets.
    UNC_H_REQUESTS.READS Counts the total number of read and write requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc.).; Incoming read requests. This is a good proxy for LLC Read Misses (including RFOs).
    UNC_H_REQUESTS.READS_LOCAL Counts the total number of read and write requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc.).; This filter includes only read requests coming from the local socket. This is a good proxy for LLC Read Misses (including RFOs) from the local socket.
    UNC_H_REQUESTS.READS_REMOTE Counts the total number of read and write requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc.).; This filter includes only read requests coming from the remote socket. This is a good proxy for LLC Read Misses (including RFOs) from the remote socket.
    UNC_H_REQUESTS.WRITES Counts the total number of read and write requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc.).; Incoming write requests.
    UNC_H_REQUESTS.WRITES_LOCAL Counts the total number of read and write requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc.).; This filter includes only writes coming from the local socket.
    UNC_H_REQUESTS.WRITES_REMOTE Counts the total number of read and write requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc.).; This filter includes only writes coming from remote sockets.
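Because UNC_H_REQUESTS.READS is a good proxy for LLC read misses (RFOs included), the LOCAL/REMOTE splits can estimate how NUMA-friendly a workload is. A hedged sketch over pre-collected counts:

    def remote_read_fraction(reads_local, reads_remote):
        """Fraction of HA read requests (a proxy for LLC read misses,
        including RFOs) that arrived from a remote socket; a high value
        suggests poor NUMA placement for the measured workload."""
        total = reads_local + reads_remote
        return reads_remote / total if total else 0.0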
    UNC_H_RING_AD_USED.CCW Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_H_RING_AD_USED.CCW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity.
    UNC_H_RING_AD_USED.CCW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity.
    UNC_H_RING_AD_USED.CW Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_H_RING_AD_USED.CW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity.
    UNC_H_RING_AD_USED.CW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity.
    UNC_H_RING_AK_USED.CCW Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_H_RING_AK_USED.CCW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity.
    UNC_H_RING_AK_USED.CCW_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity.
    UNC_H_RING_AK_USED.CW Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_H_RING_AK_USED.CW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity.
    UNC_H_RING_AK_USED.CW_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity.
    UNC_H_RING_BL_USED.CCW Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_H_RING_BL_USED.CCW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity.
    UNC_H_RING_BL_USED.CCW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity.
    UNC_H_RING_BL_USED.CW Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_H_RING_BL_USED.CW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity.
    UNC_H_RING_BL_USED.CW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity.
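The ring-used events are most informative when normalized to HA cycles over the same window; each direction (CW/CCW) can be examined separately, since the two directions are independent physical paths. A small sketch under that assumption:

    def ring_direction_utilization(used_cycles, ha_clockticks):
        """Fraction of HA uclk cycles that one ring direction (e.g. the
        CW sub-event of UNC_H_RING_BL_USED) was busy at this ring stop."""
        return used_cycles / ha_clockticks if ha_clockticks else 0.0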
    UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN0 Counts the number of cycles when there are no "regular" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.
    UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN1 Counts the number of cycles when there are no "regular" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.
    UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN2 Counts the number of cycles when there are no "regular" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.
    UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN3 Counts the number of cycles when there are no "regular" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.
    UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN0 Counts the number of cycles when there are no "special" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.
    UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN1 Counts the number of cycles when there are no "special" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.
    UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN2 Counts the number of cycles when there are no "special" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.
    UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN3 Counts the number of cycles when there are no "special" credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and "special" requests such as ISOCH reads. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.
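The no-credit cycle counts are easiest to interpret as a fraction of UNC_H_CLOCKTICKS sampled over the same interval: the result is the share of HA cycles in which reads were stalled waiting for RPQ credits. The same arithmetic applies to the WPQ write-credit events later in this table. A sketch:

    def rpq_starvation_fraction(no_credit_cycles, ha_clockticks):
        """Share of HA uclk cycles with no regular RPQ credit available
        for the selected iMC channel(s); values approaching 1.0 point to
        a read bottleneck at the memory controller."""
        return no_credit_cycles / ha_clockticks if ha_clockticks else 0.0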
    UNC_H_SBO0_CREDIT_OCCUPANCY.AD Number of Sbo 0 credits in use in a given cycle, per ring.
    UNC_H_SBO0_CREDIT_OCCUPANCY.BL Number of Sbo 0 credits in use in a given cycle, per ring.
    UNC_H_SBO0_CREDITS_ACQUIRED.AD Number of Sbo 0 credits acquired in a given cycle, per ring.
    UNC_H_SBO0_CREDITS_ACQUIRED.BL Number of Sbo 0 credits acquired in a given cycle, per ring.
    UNC_H_SBO1_CREDIT_OCCUPANCY.AD Number of Sbo 1 credits in use in a given cycle, per ring.
    UNC_H_SBO1_CREDIT_OCCUPANCY.BL Number of Sbo 1 credits in use in a given cycle, per ring.
    UNC_H_SBO1_CREDITS_ACQUIRED.AD Number of Sbo 1 credits acquired in a given cycle, per ring.
    UNC_H_SBO1_CREDITS_ACQUIRED.BL Number of Sbo 1 credits acquired in a given cycle, per ring.
    UNC_H_SNOOP_CYCLES_NE.ALL Counts cycles when one or more snoops are outstanding.; Tracked for snoops from both local and remote sockets.
    UNC_H_SNOOP_CYCLES_NE.LOCAL Counts cycles when one or more snoops are outstanding.; This filter includes only requests coming from the local socket.
    UNC_H_SNOOP_CYCLES_NE.REMOTE Counts cycles when one or more snoops are outstanding.; This filter includes only requests coming from remote sockets.
    UNC_H_SNOOP_OCCUPANCY.LOCAL Accumulates the occupancy of the local HA tracker pool entries that have snoops pending in every cycle. This can be used in conjunction with the "not empty" stat to calculate average queue occupancy or the "allocations" stat in order to calculate average queue latency. HA trackers are allocated as soon as a request enters the HA if an HT (HomeTracker) entry is available, and this occupancy is decremented when all the snoop responses have returned.; This filter includes only requests coming from the local socket.
    UNC_H_SNOOP_OCCUPANCY.REMOTE Accumulates the occupancy of the local HA tracker pool entries that have snoops pending in every cycle. This can be used in conjunction with the "not empty" stat to calculate average queue occupancy or the "allocations" stat in order to calculate average queue latency. HA trackers are allocated as soon as a request enters the HA if an HT (HomeTracker) entry is available, and this occupancy is decremented when all the snoop responses have returned.; This filter includes only requests coming from remote sockets.
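As these descriptions note, the accumulated occupancy yields two derived metrics: divided by the not-empty cycle count it gives the average number of entries with snoops in flight while any were outstanding, and divided by an allocations count it gives average snoop latency in uclks (Little's law). A sketch, assuming the companion counts were sampled over the same window:

    def avg_snoop_occupancy(occupancy_sum, cycles_not_empty):
        """Average tracker entries with snoops pending, over cycles in
        which at least one snoop was outstanding (UNC_H_SNOOP_CYCLES_NE)."""
        return occupancy_sum / cycles_not_empty if cycles_not_empty else 0.0

    def avg_snoop_latency_uclks(occupancy_sum, allocations):
        """Summed occupancy divided by allocations approximates the
        average residency per snooping entry, in HA uclk cycles."""
        return occupancy_sum / allocations if allocations else 0.0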
    UNC_H_SNOOP_RESP.RSP_FWD_WB Counts the total number of snoop responses received. Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2S, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, then each of these sub-events would increment by 1.; Filters for a snoop response of Rsp*Fwd*WB. This snoop response is only used in 4S systems. It is used when a snoop HITMs in a remote caching agent and it directly forwards data to a requestor, and simultaneously returns data to the home to be written back to memory.
    UNC_H_SNOOP_RESP.RSP_WB Counts the total number of snoop responses received. Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2S, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, then each of these sub-events would increment by 1.; Filters for a snoop response of RspIWB or RspSWB. This is returned when a non-RFO request hits in M state. Data and Code Reads can return either RspIWB or RspSWB depending on how the system has been configured. InvItoE transactions will also return RspIWB because they must acquire ownership.
    UNC_H_SNOOP_RESP.RSPCNFLCT Counts the total number of snoop responses received. Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2S, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, then each of these sub-events would increment by 1.; Filters for snoop responses of RspConflict. This is returned when a snoop finds an existing outstanding transaction in a remote caching agent when it CAMs that caching agent. This triggers conflict resolution hardware. This covers both RspCnflct and RspCnflctWbI.
    UNC_H_SNOOP_RESP.RSPI Counts the total number of snoop responses received. Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2S, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, then each of these sub-events would increment by 1.; Filters for snoop responses of RspI. RspI is returned when the remote cache does not have the data, or when the remote cache silently evicts data (such as when an RFO hits non-modified data).
    UNC_H_SNOOP_RESP.RSPIFWD Counts the total number of snoop responses received. Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2S, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, then each of these sub-events would increment by 1.; Filters for snoop responses of RspIFwd. This is returned when a remote caching agent forwards data and the requesting agent is able to acquire the data in E or M states. This is commonly returned with RFO transactions. It can be either a HitM or a HitFE.
    UNC_H_SNOOP_RESP.RSPS Counts the total number of snoop responses received. Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2S, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, then each of these sub-events would increment by 1.; Filters for snoop responses of RspS. RspS is returned when a remote cache has data but is not forwarding it. It is a way to let the requesting socket know that it cannot allocate the data in E state. No data is sent with RspS.
    UNC_H_SNOOP_RESP.RSPSFWD Counts the total number of snoop responses received. Whenever snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2S, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd, then each of these sub-events would increment by 1.; Filters for a snoop response of RspSFwd. This is returned when a remote caching agent forwards data but holds on to its current copy. This is common for data and code reads that hit in a remote socket in E or F state.
    UNC_H_SNOOPS_RSP_AFTER_DATA.LOCAL Counts the number of reads when the snoop was on the critical path to the data return.; This filter includes only requests coming from the local socket.
    UNC_H_SNOOPS_RSP_AFTER_DATA.REMOTE Counts the number of reads when the snoop was on the critical path to the data return.; This filter includes only requests coming from remote sockets.
    UNC_H_SNP_RESP_RECV_LOCAL.OTHER Number of snoop responses received for a Local request; Filters for all other snoop responses.
    UNC_H_SNP_RESP_RECV_LOCAL.RSPCNFLCT Number of snoop responses received for a Local request; Filters for snoop responses of RspConflict. This is returned when a snoop finds an existing outstanding transaction in a remote caching agent when it CAMs that caching agent. This triggers conflict resolution hardware. This covers both RspCnflct and RspCnflctWbI.
    UNC_H_SNP_RESP_RECV_LOCAL.RSPI Number of snoop responses received for a Local request; Filters for snoop responses of RspI. RspI is returned when the remote cache does not have the data, or when the remote cache silently evicts data (such as when an RFO hits non-modified data).
    UNC_H_SNP_RESP_RECV_LOCAL.RSPIFWD Number of snoop responses received for a Local request; Filters for snoop responses of RspIFwd. This is returned when a remote caching agent forwards data and the requesting agent is able to acquire the data in E or M states. This is commonly returned with RFO transactions. It can be either a HitM or a HitFE.
    UNC_H_SNP_RESP_RECV_LOCAL.RSPS Number of snoop responses received for a Local request; Filters for snoop responses of RspS. RspS is returned when a remote cache has data but is not forwarding it. It is a way to let the requesting socket know that it cannot allocate the data in E state. No data is sent with RspS.
    UNC_H_SNP_RESP_RECV_LOCAL.RSPSFWD Number of snoop responses received for a Local request; Filters for a snoop response of RspSFwd. This is returned when a remote caching agent forwards data but holds on to its current copy. This is common for data and code reads that hit in a remote socket in E or F state.
    UNC_H_SNP_RESP_RECV_LOCAL.RSPxFWDxWB Number of snoop responses received for a Local request; Filters for a snoop response of Rsp*Fwd*WB. This snoop response is only used in 4S systems. It is used when a snoop HITMs in a remote caching agent and it directly forwards data to a requestor, and simultaneously returns data to the home to be written back to memory.
    UNC_H_SNP_RESP_RECV_LOCAL.RSPxWB Number of snoop responses received for a Local request; Filters for a snoop response of RspIWB or RspSWB. This is returned when a non-RFO request hits in M state. Data and Code Reads can return either RspIWB or RspSWB depending on how the system has been configured. InvItoE transactions will also return RspIWB because they must acquire ownership.
    UNC_H_STALL_NO_SBO_CREDIT.SBO0_AD Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
    UNC_H_STALL_NO_SBO_CREDIT.SBO0_BL Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
    UNC_H_STALL_NO_SBO_CREDIT.SBO1_AD Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
    UNC_H_STALL_NO_SBO_CREDIT.SBO1_BL Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
    UNC_H_TAD_REQUESTS_G0.REGION0 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 0
    UNC_H_TAD_REQUESTS_G0.REGION1 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 1
    UNC_H_TAD_REQUESTS_G0.REGION2 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 2
    UNC_H_TAD_REQUESTS_G0.REGION3 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 3
    UNC_H_TAD_REQUESTS_G0.REGION4 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 4
    UNC_H_TAD_REQUESTS_G0.REGION5 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 5
    UNC_H_TAD_REQUESTS_G0.REGION6 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 6
    UNC_H_TAD_REQUESTS_G0.REGION7 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 0 to 7. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 7
    UNC_H_TAD_REQUESTS_G1.REGION10 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 10
    UNC_H_TAD_REQUESTS_G1.REGION11 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 11
    UNC_H_TAD_REQUESTS_G1.REGION8 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 8
    UNC_H_TAD_REQUESTS_G1.REGION9 Counts the number of HA requests to a given TAD region. There are up to 11 TAD (target address decode) regions in each home agent. All requests destined for the memory controller must first be decoded to determine which TAD region they are in. This event is filtered based on the TAD region ID, and covers regions 8 to 10. This event is useful for understanding how applications are using the memory that is spread across the different memory regions. It is particularly useful for "Monroe" systems that use the TAD to enable individual channels to enter self-refresh to save power.; Filters requests made to TAD Region 9
    UNC_H_TRACKER_CYCLES_FULL.ALL Counts the number of cycles when the local HA tracker pool is completely used. This can be used with edge detect to identify the number of situations when the pool became fully utilized. This should not be confused with RTID credit usage -- which must be tracked inside each CBo individually -- but represents the actual tracker buffer structure. In other words, the system could be starved for RTIDs but not fill up the HA trackers. HA trackers are allocated as soon as a request enters the HA and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Counts the number of cycles when the HA tracker pool (HT) is completely used, including reserved HT entries. It will not return a valid count when BT is disabled.
    UNC_H_TRACKER_CYCLES_FULL.GP Counts the number of cycles when the local HA tracker pool is completely used. This can be used with edge detect to identify the number of situations when the pool became fully utilized. This should not be confused with RTID credit usage -- which must be tracked inside each CBo individually -- but represents the actual tracker buffer structure. In other words, the system could be starved for RTIDs but not fill up the HA trackers. HA trackers are allocated as soon as a request enters the HA and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Counts the number of cycles when the general purpose (GP) HA tracker pool (HT) is completely used. It will not return a valid count when BT is disabled.
    UNC_H_TRACKER_CYCLES_NE.ALL Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each CBo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Requests coming from both local and remote sockets.
    UNC_H_TRACKER_CYCLES_NE.LOCAL Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each CBo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from the local socket.
    UNC_H_TRACKER_CYCLES_NE.REMOTE Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each CBo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from remote sockets.
    UNC_H_TRACKER_OCCUPANCY.INVITOE_LOCAL Accumulates the occupancy of the local HA tracker pool in every cycle. This can be used in conjunction with the "not empty" stat to calculate average queue occupancy or the "allocations" stat in order to calculate average queue latency. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.
    UNC_H_TRACKER_OCCUPANCY.INVITOE_REMOTE Accumulates the occupancy of the local HA tracker pool in every cycle. This can be used in conjunction with the "not empty" stat to calculate average queue occupancy or the "allocations" stat in order to calculate average queue latency. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.
    UNC_H_TRACKER_OCCUPANCY.READS_LOCAL Accumulates the occupancy of the local HA tracker pool in every cycle. This can be used in conjunction with the "not empty" stat to calculate average queue occupancy or the "allocations" stat in order to calculate average queue latency. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.
    UNC_H_TRACKER_OCCUPANCY.READS_REMOTE Accumulates the occupancy of the local HA tracker pool in every cycle. This can be used in conjunction with the "not empty" stat to calculate average queue occupancy or the "allocations" stat in order to calculate average queue latency. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.
    UNC_H_TRACKER_OCCUPANCY.WRITES_LOCAL Accumulates the occupancy of the local HA tracker pool in every cycle. This can be used in conjunction with the "not empty" stat to calculate average queue occupancy or the "allocations" stat in order to calculate average queue latency. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.
    UNC_H_TRACKER_OCCUPANCY.WRITES_REMOTE Accumulates the occupancy of the local HA tracker pool in every cycle. This can be used in conjunction with the "not empty" stat to calculate average queue occupancy or the "allocations" stat in order to calculate average queue latency. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and are released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.
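The same occupancy arithmetic applies to the tracker pool as a whole: UNC_H_TRACKER_OCCUPANCY divided by UNC_H_TRACKER_CYCLES_NE gives the average queue depth while non-empty, and dividing by the matching UNC_H_REQUESTS count gives the average tracker residency per request. A sketch using the local-read flavors as a hypothetical pairing:

    def tracker_depth_and_latency(occupancy_sum, cycles_not_empty, requests):
        """Average HA tracker queue depth and per-request residency in
        uclks, e.g. from UNC_H_TRACKER_OCCUPANCY.READS_LOCAL,
        UNC_H_TRACKER_CYCLES_NE.LOCAL and UNC_H_REQUESTS.READS_LOCAL."""
        depth = occupancy_sum / cycles_not_empty if cycles_not_empty else 0.0
        latency = occupancy_sum / requests if requests else 0.0
        return depth, latency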
    UNC_H_TRACKER_PENDING_OCCUPANCY.LOCAL Accumulates the number of transactions that have data from the memory controller until they get scheduled to the Egress. This can be used to calculate queuing latency from two causes: (1) the system is waiting for snoops, which will increase this count, or (2) the system cannot schedule to the Egress because of either (a) Egress credits or (b) QPI BL IGR credits for remote requests.; This filter includes only requests coming from the local socket.
    UNC_H_TRACKER_PENDING_OCCUPANCY.REMOTE Accumulates the number of transactions that have data from the memory controller until they get scheduled to the Egress. This can be used to calculate queuing latency from two causes: (1) the system is waiting for snoops, which will increase this count, or (2) the system cannot schedule to the Egress because of either (a) Egress credits or (b) QPI BL IGR credits for remote requests.; This filter includes only requests coming from remote sockets.
    UNC_H_TxR_AD.HOM Counts the number of outbound transactions on the AD ring. This can be filtered by the NDR and SNP message classes. See the filter descriptions for more details.; Filter for outbound NDR transactions sent on the AD ring. NDR stands for "non-data response" and is generally used for completions that do not include data. AD NDR is used for transactions to remote sockets.
    UNC_H_TxR_AD_CYCLES_FULL.ALL AD Egress Full; Cycles full from both schedulers
    UNC_H_TxR_AD_CYCLES_FULL.SCHED0 AD Egress Full; Filter for cycles full from scheduler bank 0
    UNC_H_TxR_AD_CYCLES_FULL.SCHED1 AD Egress Full; Filter for cycles full from scheduler bank 1
    UNC_H_TxR_AD_CYCLES_NE.ALL AD Egress Not Empty; Cycles not empty from both schedulers
    UNC_H_TxR_AD_CYCLES_NE.SCHED0 AD Egress Not Empty; Filter for cycles not empty from scheduler bank 0
    UNC_H_TxR_AD_CYCLES_NE.SCHED1 AD Egress Not Empty; Filter for cycles not empty from scheduler bank 1
    UNC_H_TxR_AD_INSERTS.ALL AD Egress Allocations; Allocations from both schedulers
    UNC_H_TxR_AD_INSERTS.SCHED0 AD Egress Allocations; Filter for allocations from scheduler bank 0
    UNC_H_TxR_AD_INSERTS.SCHED1 AD Egress Allocations; Filter for allocations from scheduler bank 1
    UNC_H_TxR_AK_CYCLES_FULL.ALL AK Egress Full; Cycles full from both schedulers
    UNC_H_TxR_AK_CYCLES_FULL.SCHED0 AK Egress Full; Filter for cycles full from scheduler bank 0
    UNC_H_TxR_AK_CYCLES_FULL.SCHED1 AK Egress Full; Filter for cycles full from scheduler bank 1
    UNC_H_TxR_AK_CYCLES_NE.ALL AK Egress Not Empty; Cycles not empty from both schedulers
    UNC_H_TxR_AK_CYCLES_NE.SCHED0 AK Egress Not Empty; Filter for cycles not empty from scheduler bank 0
    UNC_H_TxR_AK_CYCLES_NE.SCHED1 AK Egress Not Empty; Filter for cycles not empty from scheduler bank 1
    UNC_H_TxR_AK_INSERTS.ALL AK Egress Allocations; Allocations from both schedulers
    UNC_H_TxR_AK_INSERTS.SCHED0 AK Egress Allocations; Filter for allocations from scheduler bank 0
    UNC_H_TxR_AK_INSERTS.SCHED1 AK Egress Allocations; Filter for allocations from scheduler bank 1
    UNC_H_TxR_BL.DRS_CACHE Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination.; Filter for data being sent to the cache.
    UNC_H_TxR_BL.DRS_CORE Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination.; Filter for data being sent directly to the requesting core.
    UNC_H_TxR_BL.DRS_QPI Counts the number of DRS messages sent out on the BL ring. This can be filtered by the destination.; Filter for data being sent to a remote socket over QPI.
    UNC_H_TxR_BL_CYCLES_FULL.ALL BL Egress Full; Cycles full from both schedulers
    UNC_H_TxR_BL_CYCLES_FULL.SCHED0 BL Egress Full; Filter for cycles full from scheduler bank 0
    UNC_H_TxR_BL_CYCLES_FULL.SCHED1 BL Egress Full; Filter for cycles full from scheduler bank 1
    UNC_H_TxR_BL_CYCLES_NE.ALL BL Egress Not Empty; Cycles not empty from both schedulers
    UNC_H_TxR_BL_CYCLES_NE.SCHED0 BL Egress Not Empty; Filter for cycles not empty from scheduler bank 0
    UNC_H_TxR_BL_CYCLES_NE.SCHED1 BL Egress Not Empty; Filter for cycles not empty from scheduler bank 1
    UNC_H_TxR_BL_INSERTS.ALL BL Egress Allocations; Allocations from both schedulers
    UNC_H_TxR_BL_INSERTS.SCHED0 BL Egress Allocations; Filter for allocations from scheduler bank 0
    UNC_H_TxR_BL_INSERTS.SCHED1 BL Egress Allocations; Filter for allocations from scheduler bank 1
    UNC_H_TxR_STARVED.AK Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.
    UNC_H_TxR_STARVED.BL Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.
    UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN0 Counts the number of cycles when there are no "regular" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.
    UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN1 Counts the number of cycles when there are no "regular" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.
    UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN2 Counts the number of cycles when there are no "regular" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.
    UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN3 Counts the number of cycles when there are no "regular" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the regular credits. Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.
    UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN0 Counts the number of cycles when there are no "special" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.
    UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN1 Counts the number of cycles when there are no "special" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.
    UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN2 Counts the number of cycles when there are no "special" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.
    UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN3 Counts the number of cycles when there are no "special" credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and "special" requests such as ISOCH writes. This count only tracks the "special" credits. This statistic is generally not interesting for general IA workloads, but may be of interest for understanding the characteristics of systems using ISOCH. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.
    UNC_I_CACHE_TOTAL_OCCUPANCY.ANY Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events.; Tracks all requests from any source port.
    UNC_I_CACHE_TOTAL_OCCUPANCY.SOURCE Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events.; Tracks only those requests that come from the port specified in the IRP_PmonFilter.OrderingQ register. This register allows one to select one specific queue. It is not possible to monitor multiple queues at a time.
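    As noted above, UNC_I_CACHE_TOTAL_OCCUPANCY is effectively READ_OCCUPANCY plus WRITE_OCCUPANCY, so dividing its accumulation by UNC_I_CLOCKTICKS over the same interval yields the average number of outstanding inbound requests. A minimal sketch of that arithmetic (the counter values are placeholders):

        # Average requests outstanding in the uncore = occupancy accumulation / cycles.
        cache_total_occupancy = 123_456_789    # UNC_I_CACHE_TOTAL_OCCUPANCY.ANY
        irp_clockticks = 1_000_000_000         # UNC_I_CLOCKTICKS over the same interval
        avg_outstanding = cache_total_occupancy / irp_clockticks
        print(f"average outstanding IRP requests: {avg_outstanding:.2f}")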
    UNC_I_CLOCKTICKS Number of clocks in the IRP.
    UNC_I_COHERENT_OPS.CLFLUSH Counts the number of coherency related operations serviced by the IRP
    UNC_I_COHERENT_OPS.CRD Counts the number of coherency related operations serviced by the IRP
    UNC_I_COHERENT_OPS.DRD Counts the number of coherency related operations serviced by the IRP
    UNC_I_COHERENT_OPS.PCIDCAHINT Counts the number of coherency related operations serviced by the IRP
    UNC_I_COHERENT_OPS.PCIRDCUR Counts the number of coherency related operations serviced by the IRP
    UNC_I_COHERENT_OPS.PCITOM Counts the number of coherency related operations serviced by the IRP
    UNC_I_COHERENT_OPS.RFO Counts the number of coherency related operations serviced by the IRP
    UNC_I_COHERENT_OPS.WBMTOI Counts the number of coherency related operations serviced by the IRP
    UNC_I_MISC0.2ND_ATOMIC_INSERT Misc Events - Set 0; Cache Inserts of Atomic Transactions as Secondary
    UNC_I_MISC0.2ND_RD_INSERT Misc Events - Set 0; Cache Inserts of Read Transactions as Secondary
    UNC_I_MISC0.2ND_WR_INSERT Misc Events - Set 0; Cache Inserts of Write Transactions as Secondary
    UNC_I_MISC0.FAST_REJ Misc Events - Set 0; Fastpath Rejects
    UNC_I_MISC0.FAST_REQ Misc Events - Set 0; Fastpath Requests
    UNC_I_MISC0.FAST_XFER Misc Events - Set 0; Fastpath Transfers From Primary to Secondary
    UNC_I_MISC0.PF_ACK_HINT Misc Events - Set 0; Prefetch Ack Hints From Primary to Secondary
    UNC_I_MISC0.PF_TIMEOUT Indicates that the fetch for a previous prefetch wasn't accepted by the prefetch; this happens in the case of a prefetch timeout.
    UNC_I_MISC1.DATA_THROTTLE IRP throttled switch data
    UNC_I_MISC1.LOST_FWD Misc Events - Set 1
    UNC_I_MISC1.SEC_RCVD_INVLD Secondary received a transfer that did not have sufficient MESI state
    UNC_I_MISC1.SEC_RCVD_VLD Secondary received a transfer that did have sufficient MESI state
    UNC_I_MISC1.SLOW_E Secondary received a transfer that did have sufficient MESI state
    UNC_I_MISC1.SLOW_I Snoop took cacheline ownership before write from data was committed.
    UNC_I_MISC1.SLOW_M Snoop took cacheline ownership before write from data was committed.
    UNC_I_MISC1.SLOW_S Secondary received a transfer that did not have sufficient MESI state
    UNC_I_RxR_AK_INSERTS Counts the number of allocations into the AK Ingress. This queue is where the IRP receives responses from R2PCIe (the ring).
    UNC_I_RxR_BL_DRS_CYCLES_FULL Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.
    UNC_I_RxR_BL_DRS_INSERTS Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.
    UNC_I_RxR_BL_DRS_OCCUPANCY Accumulates the occupancy of the BL Ingress in each cycle. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.
    UNC_I_RxR_BL_NCB_CYCLES_FULL Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.
    UNC_I_RxR_BL_NCB_INSERTS Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.
    UNC_I_RxR_BL_NCB_OCCUPANCY Accumulates the occupancy of the BL Ingress in each cycle. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.
    UNC_I_RxR_BL_NCS_CYCLES_FULL Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.
    UNC_I_RxR_BL_NCS_INSERTS Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.
    UNC_I_RxR_BL_NCS_OCCUPANCY Accumulates the occupancy of the BL Ingress in each cycle. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.
    UNC_I_SNOOP_RESP.HIT_ES Snoop Responses; Hit E or S
    UNC_I_SNOOP_RESP.HIT_I Snoop Responses; Hit I
    UNC_I_SNOOP_RESP.HIT_M Snoop Responses; Hit M
    UNC_I_SNOOP_RESP.MISS Snoop Responses; Miss
    UNC_I_SNOOP_RESP.SNPCODE Snoop Responses; SnpCode
    UNC_I_SNOOP_RESP.SNPDATA Snoop Responses; SnpData
    UNC_I_SNOOP_RESP.SNPINV Snoop Responses; SnpInv
    UNC_I_TRANSACTIONS.ATOMIC Counts the number of "Inbound" transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks the number of atomic transactions
    UNC_I_TRANSACTIONS.ORDERINGQ Counts the number of "Inbound" transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks only those requests that come from the port specified in the IRP_PmonFilter.OrderingQ register. This register allows one to select one specific queue. It is not possible to monitor multiple queues at a time. If this bit is not set, then requests from all sources will be counted.
    UNC_I_TRANSACTIONS.OTHER Counts the number of "Inbound" transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks the number of 'other' kinds of transactions.
    UNC_I_TRANSACTIONS.RD_PREF Counts the number of "Inbound" transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks the number of read prefetches.
    UNC_I_TRANSACTIONS.READS Counts the number of "Inbound" transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks only read requests (not including read prefetches).
    UNC_I_TRANSACTIONS.WR_PREF Counts the number of "Inbound" transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks the number of write prefetches.
    UNC_I_TRANSACTIONS.WRITES Counts the number of "Inbound" transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks only write requests. Each write request should have a prefetch, so there is no need to explicitly track these requests. For writes that are tickled and have to retry, the counter will be incremented for each retry.
    UNC_I_TxR_AD_STALL_CREDIT_CYCLES Counts the number of times when it is not possible to issue a request to the R2PCIe because there are no AD Egress Credits available.
    UNC_I_TxR_BL_STALL_CREDIT_CYCLES Counts the number of times when it is not possible to issue data to the R2PCIe because there are no BL Egress Credits available.
    UNC_I_TxR_DATA_INSERTS_NCB Counts the number of requests issued to the switch (towards the devices).
    UNC_I_TxR_DATA_INSERTS_NCS Counts the number of requests issued to the switch (towards the devices).
    UNC_I_TxR_REQUEST_OCCUPANCY Accumulates the number of outstanding outbound requests from the IRP to the switch (towards the devices). This can be used in conjunction with the allocations event in order to calculate average latency of outbound requests.
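    Following the description above, the average latency of outbound requests falls out of Little's law: accumulated occupancy divided by allocations. A sketch, assuming UNC_I_TxR_DATA_INSERTS_NCB/NCS are the matching allocation events (that pairing is an assumption, not stated by the table):

        # Average outbound request latency in IRP clocks = occupancy / inserts.
        txr_request_occupancy = 50_000_000     # UNC_I_TxR_REQUEST_OCCUPANCY
        txr_inserts = 2_000_000                # assumed: TxR_DATA_INSERTS_NCB + _NCS
        avg_latency_clocks = txr_request_occupancy / txr_inserts
        print(f"average outbound latency: {avg_latency_clocks:.1f} IRP clocks")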
    UNC_M_ACT_COUNT.BYP Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.
    UNC_M_ACT_COUNT.RD Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.
    UNC_M_ACT_COUNT.WR Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.
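    The Page Miss subtraction described above can be written out directly; a sketch that follows the event description literally, with placeholder counts:

        # Per the ACT_COUNT description: Page Misses = Activates - Page Miss precharges.
        act_count = 10_000_000        # UNC_M_ACT_COUNT (RD + WR umasks summed)
        pre_page_miss = 7_500_000     # UNC_M_PRE_COUNT.PAGE_MISS
        page_misses = act_count - pre_page_miss
        print(f"page misses: {page_misses}")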
    UNC_M_BYP_CMDS.ACT ACT command issued by 2 cycle bypass
    UNC_M_BYP_CMDS.CAS CAS command issued by 2 cycle bypass
    UNC_M_BYP_CMDS.PRE PRE command issued by 2 cycle bypass
    UNC_M_CAS_COUNT.ALL DRAM RD_CAS and WR_CAS Commands; Counts the total number of DRAM CAS commands issued on this channel.
    UNC_M_CAS_COUNT.RD DRAM RD_CAS and WR_CAS Commands; Counts the total number of DRAM Read CAS commands issued on this channel (including underfills).
    UNC_M_CAS_COUNT.RD_REG DRAM RD_CAS and WR_CAS Commands; Counts the total number of DRAM Read CAS commands issued on this channel. This includes both regular RD CAS commands as well as those with implicit Precharge. AutoPre is only used in systems that are using closed page policy. We do not filter based on major mode, as RD_CAS is not issued during WMM (with the exception of underfills).
    UNC_M_CAS_COUNT.RD_RMM DRAM RD_CAS and WR_CAS Commands
    UNC_M_CAS_COUNT.RD_UNDERFILL DRAM RD_CAS and WR_CAS Commands; Counts the number of underfill reads that are issued by the memory controller. This will generally be about the same as the number of partial writes, but may be slightly less because of partials hitting in the WPQ. While it is possible for underfills to be issued in both WMM and RMM, this event counts both.
    UNC_M_CAS_COUNT.RD_WMM DRAM RD_CAS and WR_CAS Commands
    UNC_M_CAS_COUNT.WR DRAM RD_CAS and WR_CAS Commands; Counts the total number of DRAM Write CAS commands issued on this channel.
    UNC_M_CAS_COUNT.WR_RMM DRAM RD_CAS and WR_CAS Commands; Counts the total number of "Opportunistic" DRAM Write CAS commands issued on this channel while in Read-Major-Mode.
    UNC_M_CAS_COUNT.WR_WMM DRAM RD_CAS and WR_CAS Commands; Counts the total number of DRAM Write CAS commands issued on this channel while in Write-Major-Mode.
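    Because each CAS command moves one 64-byte cache line, the UNC_M_CAS_COUNT.RD/WR events above translate directly into channel bandwidth. A sketch of that conversion (interval and counts are placeholders):

        # Estimate per-channel DRAM bandwidth: 64 bytes per CAS command.
        CACHE_LINE_BYTES = 64
        rd_cas = 40_000_000            # UNC_M_CAS_COUNT.RD over the interval
        wr_cas = 10_000_000            # UNC_M_CAS_COUNT.WR over the interval
        interval_s = 1.0               # measurement interval in seconds
        read_gbs = rd_cas * CACHE_LINE_BYTES / interval_s / 1e9
        write_gbs = wr_cas * CACHE_LINE_BYTES / interval_s / 1e9
        print(f"read {read_gbs:.2f} GB/s, write {write_gbs:.2f} GB/s")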
    UNC_M_CLOCKTICKS DRAM Clockticks
    UNC_M_DCLOCKTICKS DRAM Clockticks
    UNC_M_DRAM_PRE_ALL Counts the number of times that the precharge all command was sent.
    UNC_M_DRAM_REFRESH.HIGH Counts the number of refreshes issued.
    UNC_M_DRAM_REFRESH.PANIC Counts the number of refreshes issued.
    UNC_M_ECC_CORRECTABLE_ERRORS Counts the number of ECC errors detected and corrected by the iMC on this channel. This counter is only useful with ECC DRAM devices. This count will increment one time for each correction regardless of the number of bits corrected. The iMC can correct up to 4 bit errors in independent channel mode and 8 bit errors in lockstep mode.
    UNC_M_MAJOR_MODES.ISOCH Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, and not a per-rank (or dimm or bank) mode.; We group these two modes together so that we can use four counters to track each of the major modes at one time. These major modes are used whenever there is an ISOCH txn in the memory controller. In this mode, only ISOCH transactions are processed.
    UNC_M_MAJOR_MODES.PARTIAL Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, and not a per-rank (or dimm or bank) mode.; This major mode is used to drain starved underfill reads. Regular reads and writes are blocked and only underfill reads will be processed.
    UNC_M_MAJOR_MODES.READ Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, and not a per-rank (or dimm or bank) mode.; Read Major Mode is the default mode for the iMC, as reads are generally more critical to forward progress than writes.
    UNC_M_MAJOR_MODES.WRITE Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modes are channel-wide, and not a per-rank (or dimm or bank) mode.; This mode is triggered when the WPQ hits high occupancy and causes writes to be higher priority than reads. This can cause blips in the available read bandwidth in the system and temporarily increase read latencies in order to achieve better bus utilizations and higher bandwidth.
    UNC_M_POWER_CHANNEL_DLLOFF Number of cycles when all the ranks in the channel are in CKE Slow (DLLOFF) mode.
    UNC_M_POWER_CHANNEL_PPD Number of cycles when all the ranks in the channel are in PPD mode. If IBT=off is enabled, then this can be used to count those cycles. If it is not enabled, then this counts the number of cycles when PPD could have been taken advantage of.
    UNC_M_POWER_CKE_CYCLES.RANK0 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).
    UNC_M_POWER_CKE_CYCLES.RANK1 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).
    UNC_M_POWER_CKE_CYCLES.RANK2 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).
    UNC_M_POWER_CKE_CYCLES.RANK3 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).
    UNC_M_POWER_CKE_CYCLES.RANK4 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).
    UNC_M_POWER_CKE_CYCLES.RANK5 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).
    UNC_M_POWER_CKE_CYCLES.RANK6 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).
    UNC_M_POWER_CKE_CYCLES.RANK7 Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).
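    As the description suggests, Invert turns this event into a count of power-saving cycles; the same number can be derived after the fact, as in this sketch (assuming the CKE event counts in the DCLK domain, with placeholder values):

        # Power-saving (CKE off) cycles for one rank = total cycles - CKE-ON cycles.
        dclk_cycles = 1_000_000_000    # UNC_M_DCLOCKTICKS over the interval
        cke_on_rank0 = 800_000_000     # UNC_M_POWER_CKE_CYCLES.RANK0
        power_saving_cycles = dclk_cycles - cke_on_rank0   # what Invert counts directly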
    UNC_M_POWER_CRITICAL_THROTTLE_CYCLES Counts the number of cycles when the iMC is in critical thermal throttling. When this happens, all traffic is blocked. This should be rare unless something bad is going on in the platform. There is no filtering by rank for this event.
    UNC_M_POWER_PCU_THROTTLING
    UNC_M_POWER_SELF_REFRESH Counts the number of cycles when the iMC is in self-refresh and the iMC still has a clock. This happens in some package C-states. For example, the PCU may ask the iMC to enter self-refresh even though some of the cores are still processing. One use of this is for Monroe technology. Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases.
    UNC_M_POWER_THROTTLE_CYCLES.RANK0 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.; Thermal throttling is performed per DIMM. We support 3 DIMMs per channel. This ID allows us to filter by ID.
    UNC_M_POWER_THROTTLE_CYCLES.RANK1 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.
    UNC_M_POWER_THROTTLE_CYCLES.RANK2 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.
    UNC_M_POWER_THROTTLE_CYCLES.RANK3 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.
    UNC_M_POWER_THROTTLE_CYCLES.RANK4 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.
    UNC_M_POWER_THROTTLE_CYCLES.RANK5 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.
    UNC_M_POWER_THROTTLE_CYCLES.RANK6 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.
    UNC_M_POWER_THROTTLE_CYCLES.RANK7 Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.
    UNC_M_PRE_COUNT.BYP Counts the number of DRAM Precharge commands sent on this channel.
    UNC_M_PRE_COUNT.PAGE_CLOSE Counts the number of DRAM Precharge commands sent on this channel.; Counts the number of DRAM Precharge commands sent on this channel as a result of the page close counter expiring. This does not include implicit precharge commands sent in auto-precharge mode.
    UNC_M_PRE_COUNT.PAGE_MISS Counts the number of DRAM Precharge commands sent on this channel.; Counts the number of DRAM Precharge commands sent on this channel as a result of page misses. This does not include explicit precharge commands sent with CAS commands in Auto-Precharge mode. This does not include PRE commands sent as a result of the page close counter expiration.
    UNC_M_PRE_COUNT.RD Counts the number of DRAM Precharge commands sent on this channel.
    UNC_M_PRE_COUNT.WR Counts the number of DRAM Precharge commands sent on this channel.
    UNC_M_PREEMPTION.RD_PREEMPT_RD Counts the number of times a read in the iMC preempts another read or write. Generally reads to an open page are issued ahead of requests to closed pages. This improves the page hit rate of the system. However, high priority requests can cause pages of active requests to be closed in order to get them out. This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency.; Filter for when a read preempts another read.
    UNC_M_PREEMPTION.RD_PREEMPT_WR Counts the number of times a read in the iMC preempts another read or write. Generally reads to an open page are issued ahead of requests to closed pages. This improves the page hit rate of the system. However, high priority requests can cause pages of active requests to be closed in order to get them out. This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency.; Filter for when a read preempts a write.
    UNC_M_RD_CAS_PRIO.HIGH Read CAS issued with HIGH priority
    UNC_M_RD_CAS_PRIO.LOW Read CAS issued with LOW priority
    UNC_M_RD_CAS_PRIO.MED Read CAS issued with MEDIUM priority
    UNC_M_RD_CAS_PRIO.PANIC Read CAS issued with PANIC NON ISOCH priority (starved)
    UNC_M_RD_CAS_RANK0.ALLBANKS RD_CAS Access to Rank 0; All Banks
    UNC_M_RD_CAS_RANK0.BANK0 RD_CAS Access to Rank 0; Bank 0
    UNC_M_RD_CAS_RANK0.BANK1 RD_CAS Access to Rank 0; Bank 1
    UNC_M_RD_CAS_RANK0.BANK10 RD_CAS Access to Rank 0; Bank 10
    UNC_M_RD_CAS_RANK0.BANK11 RD_CAS Access to Rank 0; Bank 11
    UNC_M_RD_CAS_RANK0.BANK12 RD_CAS Access to Rank 0; Bank 12
    UNC_M_RD_CAS_RANK0.BANK13 RD_CAS Access to Rank 0; Bank 13
    UNC_M_RD_CAS_RANK0.BANK14 RD_CAS Access to Rank 0; Bank 14
    UNC_M_RD_CAS_RANK0.BANK15 RD_CAS Access to Rank 0; Bank 15
    UNC_M_RD_CAS_RANK0.BANK2 RD_CAS Access to Rank 0; Bank 2
    UNC_M_RD_CAS_RANK0.BANK3 RD_CAS Access to Rank 0; Bank 3
    UNC_M_RD_CAS_RANK0.BANK4 RD_CAS Access to Rank 0; Bank 4
    UNC_M_RD_CAS_RANK0.BANK5 RD_CAS Access to Rank 0; Bank 5
    UNC_M_RD_CAS_RANK0.BANK6 RD_CAS Access to Rank 0; Bank 6
    UNC_M_RD_CAS_RANK0.BANK7 RD_CAS Access to Rank 0; Bank 7
    UNC_M_RD_CAS_RANK0.BANK8 RD_CAS Access to Rank 0; Bank 8
    UNC_M_RD_CAS_RANK0.BANK9 RD_CAS Access to Rank 0; Bank 9
    UNC_M_RD_CAS_RANK0.BANKG0 RD_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)
    UNC_M_RD_CAS_RANK0.BANKG1 RD_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)
    UNC_M_RD_CAS_RANK0.BANKG2 RD_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)
    UNC_M_RD_CAS_RANK0.BANKG3 RD_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)
    UNC_M_RD_CAS_RANK1.ALLBANKS RD_CAS Access to Rank 1; All Banks
    UNC_M_RD_CAS_RANK1.BANK0 RD_CAS Access to Rank 1; Bank 0
    UNC_M_RD_CAS_RANK1.BANK1 RD_CAS Access to Rank 1; Bank 1
    UNC_M_RD_CAS_RANK1.BANK10 RD_CAS Access to Rank 1; Bank 10
    UNC_M_RD_CAS_RANK1.BANK11 RD_CAS Access to Rank 1; Bank 11
    UNC_M_RD_CAS_RANK1.BANK12 RD_CAS Access to Rank 1; Bank 12
    UNC_M_RD_CAS_RANK1.BANK13 RD_CAS Access to Rank 1; Bank 13
    UNC_M_RD_CAS_RANK1.BANK14 RD_CAS Access to Rank 1; Bank 14
    UNC_M_RD_CAS_RANK1.BANK15 RD_CAS Access to Rank 1; Bank 15
    UNC_M_RD_CAS_RANK1.BANK2 RD_CAS Access to Rank 1; Bank 2
    UNC_M_RD_CAS_RANK1.BANK3 RD_CAS Access to Rank 1; Bank 3
    UNC_M_RD_CAS_RANK1.BANK4 RD_CAS Access to Rank 1; Bank 4
    UNC_M_RD_CAS_RANK1.BANK5 RD_CAS Access to Rank 1; Bank 5
    UNC_M_RD_CAS_RANK1.BANK6 RD_CAS Access to Rank 1; Bank 6
    UNC_M_RD_CAS_RANK1.BANK7 RD_CAS Access to Rank 1; Bank 7
    UNC_M_RD_CAS_RANK1.BANK8 RD_CAS Access to Rank 1; Bank 8
    UNC_M_RD_CAS_RANK1.BANK9 RD_CAS Access to Rank 1; Bank 9
    UNC_M_RD_CAS_RANK1.BANKG0 RD_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)
    UNC_M_RD_CAS_RANK1.BANKG1 RD_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)
    UNC_M_RD_CAS_RANK1.BANKG2 RD_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)
    UNC_M_RD_CAS_RANK1.BANKG3 RD_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)
    UNC_M_RD_CAS_RANK2.BANK0 RD_CAS Access to Rank 2; Bank 0
    UNC_M_RD_CAS_RANK4.ALLBANKS RD_CAS Access to Rank 4; All Banks
    UNC_M_RD_CAS_RANK4.BANK0 RD_CAS Access to Rank 4; Bank 0
    UNC_M_RD_CAS_RANK4.BANK1 RD_CAS Access to Rank 4; Bank 1
    UNC_M_RD_CAS_RANK4.BANK10 RD_CAS Access to Rank 4; Bank 10
    UNC_M_RD_CAS_RANK4.BANK11 RD_CAS Access to Rank 4; Bank 11
    UNC_M_RD_CAS_RANK4.BANK12 RD_CAS Access to Rank 4; Bank 12
    UNC_M_RD_CAS_RANK4.BANK13 RD_CAS Access to Rank 4; Bank 13
    UNC_M_RD_CAS_RANK4.BANK14 RD_CAS Access to Rank 4; Bank 14
    UNC_M_RD_CAS_RANK4.BANK15 RD_CAS Access to Rank 4; Bank 15
    UNC_M_RD_CAS_RANK4.BANK2 RD_CAS Access to Rank 4; Bank 2
    UNC_M_RD_CAS_RANK4.BANK3 RD_CAS Access to Rank 4; Bank 3
    UNC_M_RD_CAS_RANK4.BANK4 RD_CAS Access to Rank 4; Bank 4
    UNC_M_RD_CAS_RANK4.BANK5 RD_CAS Access to Rank 4; Bank 5
    UNC_M_RD_CAS_RANK4.BANK6 RD_CAS Access to Rank 4; Bank 6
    UNC_M_RD_CAS_RANK4.BANK7 RD_CAS Access to Rank 4; Bank 7
    UNC_M_RD_CAS_RANK4.BANK8 RD_CAS Access to Rank 4; Bank 8
    UNC_M_RD_CAS_RANK4.BANK9 RD_CAS Access to Rank 4; Bank 9
    UNC_M_RD_CAS_RANK4.BANKG0 RD_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)
    UNC_M_RD_CAS_RANK4.BANKG1 RD_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)
    UNC_M_RD_CAS_RANK4.BANKG2 RD_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)
    UNC_M_RD_CAS_RANK4.BANKG3 RD_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)
    UNC_M_RD_CAS_RANK5.ALLBANKS RD_CAS Access to Rank 5; All Banks
    UNC_M_RD_CAS_RANK5.BANK0 RD_CAS Access to Rank 5; Bank 0
    UNC_M_RD_CAS_RANK5.BANK1 RD_CAS Access to Rank 5; Bank 1
    UNC_M_RD_CAS_RANK5.BANK10 RD_CAS Access to Rank 5; Bank 10
    UNC_M_RD_CAS_RANK5.BANK11 RD_CAS Access to Rank 5; Bank 11
    UNC_M_RD_CAS_RANK5.BANK12 RD_CAS Access to Rank 5; Bank 12
    UNC_M_RD_CAS_RANK5.BANK13 RD_CAS Access to Rank 5; Bank 13
    UNC_M_RD_CAS_RANK5.BANK14 RD_CAS Access to Rank 5; Bank 14
    UNC_M_RD_CAS_RANK5.BANK15 RD_CAS Access to Rank 5; Bank 15
    UNC_M_RD_CAS_RANK5.BANK2 RD_CAS Access to Rank 5; Bank 2
    UNC_M_RD_CAS_RANK5.BANK3 RD_CAS Access to Rank 5; Bank 3
    UNC_M_RD_CAS_RANK5.BANK4 RD_CAS Access to Rank 5; Bank 4
    UNC_M_RD_CAS_RANK5.BANK5 RD_CAS Access to Rank 5; Bank 5
    UNC_M_RD_CAS_RANK5.BANK6 RD_CAS Access to Rank 5; Bank 6
    UNC_M_RD_CAS_RANK5.BANK7 RD_CAS Access to Rank 5; Bank 7
    UNC_M_RD_CAS_RANK5.BANK8 RD_CAS Access to Rank 5; Bank 8
    UNC_M_RD_CAS_RANK5.BANK9 RD_CAS Access to Rank 5; Bank 9
    UNC_M_RD_CAS_RANK5.BANKG0 RD_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)
    UNC_M_RD_CAS_RANK5.BANKG1 RD_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)
    UNC_M_RD_CAS_RANK5.BANKG2 RD_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)
    UNC_M_RD_CAS_RANK5.BANKG3 RD_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)
    UNC_M_RD_CAS_RANK6.ALLBANKS RD_CAS Access to Rank 6; All Banks
    UNC_M_RD_CAS_RANK6.BANK0 RD_CAS Access to Rank 6; Bank 0
    UNC_M_RD_CAS_RANK6.BANK1 RD_CAS Access to Rank 6; Bank 1
    UNC_M_RD_CAS_RANK6.BANK10 RD_CAS Access to Rank 6; Bank 10
    UNC_M_RD_CAS_RANK6.BANK11 RD_CAS Access to Rank 6; Bank 11
    UNC_M_RD_CAS_RANK6.BANK12 RD_CAS Access to Rank 6; Bank 12
    UNC_M_RD_CAS_RANK6.BANK13 RD_CAS Access to Rank 6; Bank 13
    UNC_M_RD_CAS_RANK6.BANK14 RD_CAS Access to Rank 6; Bank 14
    UNC_M_RD_CAS_RANK6.BANK15 RD_CAS Access to Rank 6; Bank 15
    UNC_M_RD_CAS_RANK6.BANK2 RD_CAS Access to Rank 6; Bank 2
    UNC_M_RD_CAS_RANK6.BANK3 RD_CAS Access to Rank 6; Bank 3
    UNC_M_RD_CAS_RANK6.BANK4 RD_CAS Access to Rank 6; Bank 4
    UNC_M_RD_CAS_RANK6.BANK5 RD_CAS Access to Rank 6; Bank 5
    UNC_M_RD_CAS_RANK6.BANK6 RD_CAS Access to Rank 6; Bank 6
    UNC_M_RD_CAS_RANK6.BANK7 RD_CAS Access to Rank 6; Bank 7
    UNC_M_RD_CAS_RANK6.BANK8 RD_CAS Access to Rank 6; Bank 8
    UNC_M_RD_CAS_RANK6.BANK9 RD_CAS Access to Rank 6; Bank 9
    UNC_M_RD_CAS_RANK6.BANKG0 RD_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)
    UNC_M_RD_CAS_RANK6.BANKG1 RD_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)
    UNC_M_RD_CAS_RANK6.BANKG2 RD_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)
    UNC_M_RD_CAS_RANK6.BANKG3 RD_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)
    UNC_M_RD_CAS_RANK7.ALLBANKS RD_CAS Access to Rank 7; All Banks
    UNC_M_RD_CAS_RANK7.BANK0 RD_CAS Access to Rank 7; Bank 0
    UNC_M_RD_CAS_RANK7.BANK1 RD_CAS Access to Rank 7; Bank 1
    UNC_M_RD_CAS_RANK7.BANK10 RD_CAS Access to Rank 7; Bank 10
    UNC_M_RD_CAS_RANK7.BANK11 RD_CAS Access to Rank 7; Bank 11
    UNC_M_RD_CAS_RANK7.BANK12 RD_CAS Access to Rank 7; Bank 12
    UNC_M_RD_CAS_RANK7.BANK13 RD_CAS Access to Rank 7; Bank 13
    UNC_M_RD_CAS_RANK7.BANK14 RD_CAS Access to Rank 7; Bank 14
    UNC_M_RD_CAS_RANK7.BANK15 RD_CAS Access to Rank 7; Bank 15
    UNC_M_RD_CAS_RANK7.BANK2 RD_CAS Access to Rank 7; Bank 2
    UNC_M_RD_CAS_RANK7.BANK3 RD_CAS Access to Rank 7; Bank 3
    UNC_M_RD_CAS_RANK7.BANK4 RD_CAS Access to Rank 7; Bank 4
    UNC_M_RD_CAS_RANK7.BANK5 RD_CAS Access to Rank 7; Bank 5
    UNC_M_RD_CAS_RANK7.BANK6 RD_CAS Access to Rank 7; Bank 6
    UNC_M_RD_CAS_RANK7.BANK7 RD_CAS Access to Rank 7; Bank 7
    UNC_M_RD_CAS_RANK7.BANK8 RD_CAS Access to Rank 7; Bank 8
    UNC_M_RD_CAS_RANK7.BANK9 RD_CAS Access to Rank 7; Bank 9
    UNC_M_RD_CAS_RANK7.BANKG0 RD_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)
    UNC_M_RD_CAS_RANK7.BANKG1 RD_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)
    UNC_M_RD_CAS_RANK7.BANKG2 RD_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)
    UNC_M_RD_CAS_RANK7.BANKG3 RD_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)
    UNC_M_RPQ_CYCLES_NE Counts the number of cycles that the Read Pending Queue is not empty. This can then be used to calculate the average occupancy (in conjunction with the Read Pending Queue Occupancy count). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requests.
    UNC_M_RPQ_INSERTS Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.
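    Combining these events with the occupancy accumulation the description refers to gives average queue depth and, via Little's law, average residency per read. A sketch; the occupancy event name here is an assumption:

        # Average RPQ depth and per-read residency from iMC counters.
        rpq_occupancy = 600_000_000    # assumed occupancy accumulation (UNC_M_RPQ_OCCUPANCY)
        dclk_cycles = 1_000_000_000    # UNC_M_DCLOCKTICKS
        rpq_inserts = 30_000_000       # UNC_M_RPQ_INSERTS
        avg_depth = rpq_occupancy / dclk_cycles        # entries per DCLK
        avg_residency = rpq_occupancy / rpq_inserts    # DCLKs per read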
    UNC_M_VMSE_MXB_WR_OCCUPANCY VMSE MXB write buffer occupancy
    UNC_M_VMSE_WR_PUSH.RMM VMSE WR PUSH issued; VMSE write PUSH issued in RMM
    UNC_M_VMSE_WR_PUSH.WMM VMSE WR PUSH issued; VMSE write PUSH issued in WMM
    UNC_M_WMM_TO_RMM.LOW_THRESH Transition from WMM to RMM because of low threshold
    UNC_M_WMM_TO_RMM.STARVE Transition from WMM to RMM because of the starve counter
    UNC_M_WMM_TO_RMM.VMSE_RETRY Transition from WMM to RMM because of a VMSE retry
    UNC_M_WPQ_CYCLES_FULL Counts the number of cycles when the Write Pending Queue is full. When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC. This count should be similar to the count in the HA which tracks the number of cycles that the HA has no WPQ credits, just somewhat smaller to account for the credit return overhead.
    UNC_M_WPQ_CYCLES_NE Counts the number of cycles that the Write Pending Queue is not empty. This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count). The WPQ is used to schedule writes out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon as they have "posted" to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstructing intermediate write latencies.
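    The same arithmetic applies on the write side: dividing the WPQ Occupancy Accumulation by the not-empty cycles gives the average depth while the queue held work (a sketch; the occupancy event name is assumed):

        # Average WPQ depth over the cycles it was not empty.
        wpq_occupancy = 400_000_000    # assumed WPQ Occupancy Accumulation count
        wpq_cycles_ne = 200_000_000    # UNC_M_WPQ_CYCLES_NE
        avg_depth_when_busy = wpq_occupancy / wpq_cycles_ne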
    UNC_M_WPQ_READ_HIT Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.
    UNC_M_WPQ_WRITE_HIT Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.
    UNC_M_WR_CAS_RANK0.ALLBANKS WR_CAS Access to Rank 0; All Banks
    UNC_M_WR_CAS_RANK0.BANK0 WR_CAS Access to Rank 0; Bank 0
    UNC_M_WR_CAS_RANK0.BANK1 WR_CAS Access to Rank 0; Bank 1
    UNC_M_WR_CAS_RANK0.BANK10 WR_CAS Access to Rank 0; Bank 10
    UNC_M_WR_CAS_RANK0.BANK11 WR_CAS Access to Rank 0; Bank 11
    UNC_M_WR_CAS_RANK0.BANK12 WR_CAS Access to Rank 0; Bank 12
    UNC_M_WR_CAS_RANK0.BANK13 WR_CAS Access to Rank 0; Bank 13
    UNC_M_WR_CAS_RANK0.BANK14 WR_CAS Access to Rank 0; Bank 14
    UNC_M_WR_CAS_RANK0.BANK15 WR_CAS Access to Rank 0; Bank 15
    UNC_M_WR_CAS_RANK0.BANK2 WR_CAS Access to Rank 0; Bank 2
    UNC_M_WR_CAS_RANK0.BANK3 WR_CAS Access to Rank 0; Bank 3
    UNC_M_WR_CAS_RANK0.BANK4 WR_CAS Access to Rank 0; Bank 4
    UNC_M_WR_CAS_RANK0.BANK5 WR_CAS Access to Rank 0; Bank 5
    UNC_M_WR_CAS_RANK0.BANK6 WR_CAS Access to Rank 0; Bank 6
    UNC_M_WR_CAS_RANK0.BANK7 WR_CAS Access to Rank 0; Bank 7
    UNC_M_WR_CAS_RANK0.BANK8 WR_CAS Access to Rank 0; Bank 8
    UNC_M_WR_CAS_RANK0.BANK9 WR_CAS Access to Rank 0; Bank 9
    UNC_M_WR_CAS_RANK0.BANKG0 WR_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)
    UNC_M_WR_CAS_RANK0.BANKG1 WR_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)
    UNC_M_WR_CAS_RANK0.BANKG2 WR_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)
    UNC_M_WR_CAS_RANK0.BANKG3 WR_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)
    UNC_M_WR_CAS_RANK1.ALLBANKS WR_CAS Access to Rank 1; All Banks
    UNC_M_WR_CAS_RANK1.BANK0 WR_CAS Access to Rank 1; Bank 0
    UNC_M_WR_CAS_RANK1.BANK1 WR_CAS Access to Rank 1; Bank 1
    UNC_M_WR_CAS_RANK1.BANK10 WR_CAS Access to Rank 1; Bank 10
    UNC_M_WR_CAS_RANK1.BANK11 WR_CAS Access to Rank 1; Bank 11
    UNC_M_WR_CAS_RANK1.BANK12 WR_CAS Access to Rank 1; Bank 12
    UNC_M_WR_CAS_RANK1.BANK13 WR_CAS Access to Rank 1; Bank 13
    UNC_M_WR_CAS_RANK1.BANK14 WR_CAS Access to Rank 1; Bank 14
    UNC_M_WR_CAS_RANK1.BANK15 WR_CAS Access to Rank 1; Bank 15
    UNC_M_WR_CAS_RANK1.BANK2 WR_CAS Access to Rank 1; Bank 2
    UNC_M_WR_CAS_RANK1.BANK3 WR_CAS Access to Rank 1; Bank 3
    UNC_M_WR_CAS_RANK1.BANK4 WR_CAS Access to Rank 1; Bank 4
    UNC_M_WR_CAS_RANK1.BANK5 WR_CAS Access to Rank 1; Bank 5
    UNC_M_WR_CAS_RANK1.BANK6 WR_CAS Access to Rank 1; Bank 6
    UNC_M_WR_CAS_RANK1.BANK7 WR_CAS Access to Rank 1; Bank 7
    UNC_M_WR_CAS_RANK1.BANK8 WR_CAS Access to Rank 1; Bank 8
    UNC_M_WR_CAS_RANK1.BANK9 WR_CAS Access to Rank 1; Bank 9
    UNC_M_WR_CAS_RANK1.BANKG0 WR_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)
    UNC_M_WR_CAS_RANK1.BANKG1 WR_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)
    UNC_M_WR_CAS_RANK1.BANKG2 WR_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)
    UNC_M_WR_CAS_RANK1.BANKG3 WR_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)
    UNC_M_WR_CAS_RANK4.ALLBANKS WR_CAS Access to Rank 4; All Banks
    UNC_M_WR_CAS_RANK4.BANK0 WR_CAS Access to Rank 4; Bank 0
    UNC_M_WR_CAS_RANK4.BANK1 WR_CAS Access to Rank 4; Bank 1
    UNC_M_WR_CAS_RANK4.BANK10 WR_CAS Access to Rank 4; Bank 10
    UNC_M_WR_CAS_RANK4.BANK11 WR_CAS Access to Rank 4; Bank 11
    UNC_M_WR_CAS_RANK4.BANK12 WR_CAS Access to Rank 4; Bank 12
    UNC_M_WR_CAS_RANK4.BANK13 WR_CAS Access to Rank 4; Bank 13
    UNC_M_WR_CAS_RANK4.BANK14 WR_CAS Access to Rank 4; Bank 14
    UNC_M_WR_CAS_RANK4.BANK15 WR_CAS Access to Rank 4; Bank 15
    UNC_M_WR_CAS_RANK4.BANK2 WR_CAS Access to Rank 4; Bank 2
    UNC_M_WR_CAS_RANK4.BANK3 WR_CAS Access to Rank 4; Bank 3
    UNC_M_WR_CAS_RANK4.BANK4 WR_CAS Access to Rank 4; Bank 4
    UNC_M_WR_CAS_RANK4.BANK5 WR_CAS Access to Rank 4; Bank 5
    UNC_M_WR_CAS_RANK4.BANK6 WR_CAS Access to Rank 4; Bank 6
    UNC_M_WR_CAS_RANK4.BANK7 WR_CAS Access to Rank 4; Bank 7
    UNC_M_WR_CAS_RANK4.BANK8 WR_CAS Access to Rank 4; Bank 8
    UNC_M_WR_CAS_RANK4.BANK9 WR_CAS Access to Rank 4; Bank 9
    UNC_M_WR_CAS_RANK4.BANKG0 WR_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)
    UNC_M_WR_CAS_RANK4.BANKG1 WR_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)
    UNC_M_WR_CAS_RANK4.BANKG2 WR_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)
    UNC_M_WR_CAS_RANK4.BANKG3 WR_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)
    UNC_M_WR_CAS_RANK5.ALLBANKS WR_CAS Access to Rank 5; All Banks
    UNC_M_WR_CAS_RANK5.BANK0 WR_CAS Access to Rank 5; Bank 0
    UNC_M_WR_CAS_RANK5.BANK1 WR_CAS Access to Rank 5; Bank 1
    UNC_M_WR_CAS_RANK5.BANK10 WR_CAS Access to Rank 5; Bank 10
    UNC_M_WR_CAS_RANK5.BANK11 WR_CAS Access to Rank 5; Bank 11
    UNC_M_WR_CAS_RANK5.BANK12 WR_CAS Access to Rank 5; Bank 12
    UNC_M_WR_CAS_RANK5.BANK13 WR_CAS Access to Rank 5; Bank 13
    UNC_M_WR_CAS_RANK5.BANK14 WR_CAS Access to Rank 5; Bank 14
    UNC_M_WR_CAS_RANK5.BANK15 WR_CAS Access to Rank 5; Bank 15
    UNC_M_WR_CAS_RANK5.BANK2 WR_CAS Access to Rank 5; Bank 2
    UNC_M_WR_CAS_RANK5.BANK3 WR_CAS Access to Rank 5; Bank 3
    UNC_M_WR_CAS_RANK5.BANK4 WR_CAS Access to Rank 5; Bank 4
    UNC_M_WR_CAS_RANK5.BANK5 WR_CAS Access to Rank 5; Bank 5
    UNC_M_WR_CAS_RANK5.BANK6 WR_CAS Access to Rank 5; Bank 6
    UNC_M_WR_CAS_RANK5.BANK7 WR_CAS Access to Rank 5; Bank 7
    UNC_M_WR_CAS_RANK5.BANK8 WR_CAS Access to Rank 5; Bank 8
    UNC_M_WR_CAS_RANK5.BANK9 WR_CAS Access to Rank 5; Bank 9
    UNC_M_WR_CAS_RANK5.BANKG0 WR_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)
    UNC_M_WR_CAS_RANK5.BANKG1 WR_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)
    UNC_M_WR_CAS_RANK5.BANKG2 WR_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)
    UNC_M_WR_CAS_RANK5.BANKG3 WR_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)
    UNC_M_WR_CAS_RANK6.ALLBANKS WR_CAS Access to Rank 6; All Banks
    UNC_M_WR_CAS_RANK6.BANK0 WR_CAS Access to Rank 6; Bank 0
    UNC_M_WR_CAS_RANK6.BANK1 WR_CAS Access to Rank 6; Bank 1
    UNC_M_WR_CAS_RANK6.BANK10 WR_CAS Access to Rank 6; Bank 10
    UNC_M_WR_CAS_RANK6.BANK11 WR_CAS Access to Rank 6; Bank 11
    UNC_M_WR_CAS_RANK6.BANK12 WR_CAS Access to Rank 6; Bank 12
    UNC_M_WR_CAS_RANK6.BANK13 WR_CAS Access to Rank 6; Bank 13
    UNC_M_WR_CAS_RANK6.BANK14 WR_CAS Access to Rank 6; Bank 14
    UNC_M_WR_CAS_RANK6.BANK15 WR_CAS Access to Rank 6; Bank 15
    UNC_M_WR_CAS_RANK6.BANK2 WR_CAS Access to Rank 6; Bank 2
    UNC_M_WR_CAS_RANK6.BANK3 WR_CAS Access to Rank 6; Bank 3
    UNC_M_WR_CAS_RANK6.BANK4 WR_CAS Access to Rank 6; Bank 4
    UNC_M_WR_CAS_RANK6.BANK5 WR_CAS Access to Rank 6; Bank 5
    UNC_M_WR_CAS_RANK6.BANK6 WR_CAS Access to Rank 6; Bank 6
    UNC_M_WR_CAS_RANK6.BANK7 WR_CAS Access to Rank 6; Bank 7
    UNC_M_WR_CAS_RANK6.BANK8 WR_CAS Access to Rank 6; Bank 8
    UNC_M_WR_CAS_RANK6.BANK9 WR_CAS Access to Rank 6; Bank 9
    UNC_M_WR_CAS_RANK6.BANKG0 WR_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)
    UNC_M_WR_CAS_RANK6.BANKG1 WR_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)
    UNC_M_WR_CAS_RANK6.BANKG2 WR_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)
    UNC_M_WR_CAS_RANK6.BANKG3 WR_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)
    UNC_M_WR_CAS_RANK7.ALLBANKS WR_CAS Access to Rank 7; All Banks
    UNC_M_WR_CAS_RANK7.BANK0 WR_CAS Access to Rank 7; Bank 0
    UNC_M_WR_CAS_RANK7.BANK1 WR_CAS Access to Rank 7; Bank 1
    UNC_M_WR_CAS_RANK7.BANK10 WR_CAS Access to Rank 7; Bank 10
    UNC_M_WR_CAS_RANK7.BANK11 WR_CAS Access to Rank 7; Bank 11
    UNC_M_WR_CAS_RANK7.BANK12 WR_CAS Access to Rank 7; Bank 12
    UNC_M_WR_CAS_RANK7.BANK13 WR_CAS Access to Rank 7; Bank 13
    UNC_M_WR_CAS_RANK7.BANK14 WR_CAS Access to Rank 7; Bank 14
    UNC_M_WR_CAS_RANK7.BANK15 WR_CAS Access to Rank 7; Bank 15
    UNC_M_WR_CAS_RANK7.BANK2 WR_CAS Access to Rank 7; Bank 2
    UNC_M_WR_CAS_RANK7.BANK3 WR_CAS Access to Rank 7; Bank 3
    UNC_M_WR_CAS_RANK7.BANK4 WR_CAS Access to Rank 7; Bank 4
    UNC_M_WR_CAS_RANK7.BANK5 WR_CAS Access to Rank 7; Bank 5
    UNC_M_WR_CAS_RANK7.BANK6 WR_CAS Access to Rank 7; Bank 6
    UNC_M_WR_CAS_RANK7.BANK7 WR_CAS Access to Rank 7; Bank 7
    UNC_M_WR_CAS_RANK7.BANK8 WR_CAS Access to Rank 7; Bank 8
    UNC_M_WR_CAS_RANK7.BANK9 WR_CAS Access to Rank 7; Bank 9
    UNC_M_WR_CAS_RANK7.BANKG0 WR_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)
    UNC_M_WR_CAS_RANK7.BANKG1 WR_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)
    UNC_M_WR_CAS_RANK7.BANKG2 WR_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)
    UNC_M_WR_CAS_RANK7.BANKG3 WR_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)
    UNC_M_WRONG_MM Not getting the requested Major Mode
    UNC_P_CLOCKTICKS The PCU runs off a fixed 800 MHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate, making it a good measure of actual wall time.
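    Because pclk is fixed at 800 MHz, the count converts directly into wall time; a one-line sketch:

        # Convert PCU clockticks to seconds (fixed 800 MHz pclk).
        PCLK_HZ = 800_000_000
        pclk_count = 2_400_000_000     # UNC_P_CLOCKTICKS over the interval
        seconds = pclk_count / PCLK_HZ # 3.0 s in this example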
    UNC_P_CORE0_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE1_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE10_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE11_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE12_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE13_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE14_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE15_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE16_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE17_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE2_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE3_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE4_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE5_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE6_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE7_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE8_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_CORE9_TRANSITION_CYCLES Number of cycles spent performing core C state transitions. There is one event per core.
    UNC_P_DEMOTIONS_CORE0 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE1 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE10 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE11 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE12 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE13 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE14 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE15 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE16 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE17 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE2 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE3 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE4 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE5 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE6 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE7 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE8 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_DEMOTIONS_CORE9 Counts the number of times when a configurable core had a C-state demotion
    UNC_P_FREQ_BAND0_CYCLES Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.
    UNC_P_FREQ_BAND1_CYCLES Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.
    UNC_P_FREQ_BAND2_CYCLES Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.
    UNC_P_FREQ_BAND3_CYCLES Counts the number of cycles that the uncore was running at a frequency greater than or equal to the frequency that is configured in the filter. One can use all four counters with this event, so it is possible to track up to 4 configurable bands. One can use edge detect in conjunction with this event to track the number of times that we transitioned into a frequency greater than or equal to the configurable frequency. One can also use inversion to track cycles when we were less than the configured frequency.
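    Dividing a band's cycle count by total pclk cycles gives the fraction of time spent at or above its configured frequency; a sketch with illustrative values:

        # Fraction of time at or above each configured frequency band.
        pclk = 1_000_000_000           # UNC_P_CLOCKTICKS
        band_cycles = {0: 900e6, 1: 600e6, 2: 250e6, 3: 50e6}  # UNC_P_FREQ_BANDn_CYCLES
        for band, cycles in band_cycles.items():
            print(f"band {band}: {cycles / pclk:.1%} of time at/above its threshold")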
    UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES Counts the number of cycles when thermal conditions are the upper limit on frequency. This is related to the THERMAL_THROTTLE CYCLES_ABOVE_TEMP event, which always counts cycles when we are above the thermal temperature. This event (STRONGEST_UPPER_LIMIT) is sampled at the output of the algorithm that determines the actual frequency, while THERMAL_THROTTLE looks at the input.
    UNC_P_FREQ_MAX_OS_CYCLES Counts the number of cycles when the OS is the upper limit on frequency.
    UNC_P_FREQ_MAX_POWER_CYCLES Counts the number of cycles when power is the upper limit on frequency.
    UNC_P_FREQ_MIN_IO_P_CYCLES Counts the number of cycles when IO P Limit is preventing us from dropping the frequency lower. This algorithm monitors the needs of the IO subsystem on both local and remote sockets and will maintain a frequency high enough to maintain good IO BW. This is necessary when all the IA cores on a socket are idle but a user would still like to maintain high IO Bandwidth.
    UNC_P_FREQ_TRANS_CYCLES Counts the number of cycles when the system is changing frequency. This cannot be filtered by thread ID. One can also use it with the occupancy counter that monitors number of threads in C0 to estimate the performance impact that frequency transitions had on the system.
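    One rough way to act on the suggestion above is to scale the transition fraction by the average number of active cores; this is only an illustrative proxy, not a metric defined by the event table:

        # Illustrative impact proxy: time in transitions, weighted by busy cores.
        freq_trans_cycles = 20_000_000         # UNC_P_FREQ_TRANS_CYCLES
        pclk = 1_000_000_000                   # UNC_P_CLOCKTICKS
        avg_cores_c0 = 12.0                    # POWER_STATE_OCCUPANCY.CORES_C0 / pclk
        impact_proxy = (freq_trans_cycles / pclk) * avg_cores_c0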
    UNC_P_MEMORY_PHASE_SHEDDING_CYCLES Counts the number of cycles that the PCU has triggered memory phase shedding. This is a mode that can be run in the iMC physicals that saves power at the expense of additional latency.
    UNC_P_PKG_RESIDENCY_C0_CYCLES Counts the number of cycles when the package was in C0. This event can be used in conjunction with edge detect to count C0 entrances (or exits using invert). Residency events do not include transition times.
    UNC_P_PKG_RESIDENCY_C1E_CYCLES Counts the number of cycles when the package was in C1E. This event can be used in conjunction with edge detect to count C1E entrances (or exits using invert). Residency events do not include transition times.
    UNC_P_PKG_RESIDENCY_C2E_CYCLES Counts the number of cycles when the package was in C2E. This event can be used in conjunction with edge detect to count C2E entrances (or exits using invert). Residency events do not include transition times.
    UNC_P_PKG_RESIDENCY_C3_CYCLES Counts the number of cycles when the package was in C3. This event can be used in conjunction with edge detect to count C3 entrances (or exits using invert). Residency events do not include transition times.
    UNC_P_PKG_RESIDENCY_C6_CYCLES Counts the number of cycles when the package was in C6. This event can be used in conjunction with edge detect to count C6 entrances (or exits using invert). Residency events do not include transition times.
    UNC_P_PKG_RESIDENCY_C7_CYCLES Counts the number of cycles when the package was in C7. This event can be used in conjunction with edge detect to count C7 entrances (or exits using invert). Residency events do not include transition times.
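    Since the residency events support edge detect, one common derived metric is the average length of each package C-state visit. A hedged sketch (illustrative counts, not measured data):

        # Average C6 visit length: the plain residency count divided by the
        # entrance count obtained by programming the same event with edge detect
        # enabled. Residency excludes transition time, so this slightly
        # understates the true visit length.
        c6_residency_cycles = 5.0e8   # UNC_P_PKG_RESIDENCY_C6_CYCLES
        c6_entrances = 1200           # same event counted with edge detect

        print(f"average C6 visit: {c6_residency_cycles / c6_entrances:,.0f} PCU cycles")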
    UNC_P_POWER_STATE_OCCUPANCY.CORES_C0 This is an occupancy event that tracks the number of cores that are in the chosen C-state. It can be used by itself to get the average number of cores in that C-state, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.
    UNC_P_POWER_STATE_OCCUPANCY.CORES_C3 This is an occupancy event that tracks the number of cores that are in the chosen C-state. It can be used by itself to get the average number of cores in that C-state, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.
    UNC_P_POWER_STATE_OCCUPANCY.CORES_C6 This is an occupancy event that tracks the number of cores that are in the chosen C-state. It can be used by itself to get the average number of cores in that C-state, with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.
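    Because the CORES_C* events accumulate an occupancy value every PCU cycle, dividing by elapsed PCU cycles yields a time-averaged core count. A sketch, again assuming a UNC_P_CLOCKTICKS total (not defined in this section; counts are illustrative):

        # Time-averaged number of cores resident in C0 over the sampling window.
        cores_c0_occupancy = 3.6e10   # UNC_P_POWER_STATE_OCCUPANCY.CORES_C0
        pcu_clockticks = 2.0e9        # assumed elapsed PCU cycles

        print(f"average cores in C0: {cores_c0_occupancy / pcu_clockticks:.2f}")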
    UNC_P_PROCHOT_EXTERNAL_CYCLES Counts the number of cycles that we are in external PROCHOT mode. This mode is triggered when an off-die sensor determines that something off-die (like DRAM) is too hot, and the chip must throttle to avoid damage.
    UNC_P_PROCHOT_INTERNAL_CYCLES Counts the number of cycles that we are in Internal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.
    UNC_P_TOTAL_TRANSITION_CYCLES Number of cycles spent performing core C state transitions across all cores.
    UNC_P_UFS_TRANSITIONS_NO_CHANGE Ring GV with same final and initial frequency
    UNC_P_UFS_TRANSITIONS_RING_GV Ring GV with same final and initial frequency
    UNC_P_VR_HOT_CYCLES VR Hot
    UNC_Q_CLOCKTICKS Counts the number of clocks in the QPI LL. This clock runs at 1/4th the "GT/s" speed of the QPI link. For example, a 4GT/s link will have a qfclk of 1GHz. HSX does not support dynamic link speeds, so this frequency is fixed.
    UNC_Q_CTO_COUNT Counts the number of CTO (cluster trigger out) events that were asserted across the two slots. If both slots trigger in a given cycle, the event will increment by 2. You can use edge detect to count the number of cases when both events triggered.
    UNC_Q_DIRECT2CORE.FAILURE_CREDITS Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.; The spawn failed because there were not enough Egress credits. Had there been enough credits, the spawn would have worked as the RBT bit was set and the RBT tag matched.
    UNC_Q_DIRECT2CORE.FAILURE_CREDITS_MISS Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.; The spawn failed because the RBT tag did not match and there weren't enough Egress credits. The valid bit was set.
    UNC_Q_DIRECT2CORE.FAILURE_CREDITS_RBT Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.; The spawn failed because there were not enough Egress credits AND the RBT bit was not set, but the RBT tag matched.
    UNC_Q_DIRECT2CORE.FAILURE_CREDITS_RBT_MISS Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.; The spawn failed because the RBT tag did not match, the valid bit was not set and there weren't enough Egress credits.
    UNC_Q_DIRECT2CORE.FAILURE_MISS Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.; The spawn failed because the RBT tag did not match although the valid bit was set and there were enough Egress credits.
    UNC_Q_DIRECT2CORE.FAILURE_RBT_HIT Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.; The spawn failed because the route-back table (RBT) specified that the transaction should not trigger a direct2core transaction. This is common for IO transactions. There were enough Egress credits and the RBT tag matched but the valid bit was not set.
    UNC_Q_DIRECT2CORE.FAILURE_RBT_MISS Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.; The spawn failed because the RBT tag did not match and the valid bit was not set although there were enough Egress credits.
    UNC_Q_DIRECT2CORE.SUCCESS_RBT_HIT Counts the number of DRS packets that we attempted to do direct2core on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.; The spawn was successful. There were sufficient credits, the RBT valid bit was set and there was an RBT tag match. The message was marked to spawn direct2core.
    UNC_Q_L1_POWER_CYCLES Number of QPI qfclk cycles spent in L1 power mode. L1 is a mode that totally shuts down a QPI link. Use edge detect to count the number of instances when the QPI link entered L1. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. Because L1 totally shuts down the link, it takes a good amount of time to exit this mode.
    UNC_Q_RxL_BYPASSED Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.
    UNC_Q_RxL_CRC_ERRORS.LINK_INIT Number of CRC errors detected in the QPI Agent. Each QPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the QPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it).; CRC errors detected during link initialization.
    UNC_Q_RxL_CRC_ERRORS.NORMAL_OP Number of CRC errors detected in the QPI Agent. Each QPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the QPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it).; CRC errors detected during normal operation.
    UNC_Q_RxL_CREDITS_CONSUMED_VN0.DRS Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN0 credit for the DRS message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VN0.HOM Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN0 credit for the HOM message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VN0.NCB Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN0 credit for the NCB message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VN0.NCS Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN0 credit for the NCS message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VN0.NDR Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN0 credit for the NDR message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VN0.SNP Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN0 credit for the SNP message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VN1.DRS Counts the number of times that an RxQ VN1 credit was consumed (i.e. message uses a VN1 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN1 credit for the DRS message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VN1.HOM Counts the number of times that an RxQ VN1 credit was consumed (i.e. message uses a VN1 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN1 credit for the HOM message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VN1.NCB Counts the number of times that an RxQ VN1 credit was consumed (i.e. message uses a VN1 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN1 credit for the NCB message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VN1.NCS Counts the number of times that an RxQ VN1 credit was consumed (i.e. message uses a VN1 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN1 credit for the NCS message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VN1.NDR Counts the number of times that an RxQ VN1 credit was consumed (i.e. message uses a VN1 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN1 credit for the NDR message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VN1.SNP Counts the number of times that an RxQ VN1 credit was consumed (i.e. message uses a VN1 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.; VN1 credit for the SNP message class.
    UNC_Q_RxL_CREDITS_CONSUMED_VNA Counts the number of times that an RxQ VNA credit was consumed (i.e. message uses a VNA credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypassed.
    UNC_Q_RxL_CYCLES_NE Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy.
    UNC_Q_RxL_CYCLES_NE_DRS.VN0 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors DRS flits only.
    UNC_Q_RxL_CYCLES_NE_DRS.VN1 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors DRS flits only.
    UNC_Q_RxL_CYCLES_NE_HOM.VN0 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors HOM flits only.
    UNC_Q_RxL_CYCLES_NE_HOM.VN1 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors HOM flits only.
    UNC_Q_RxL_CYCLES_NE_NCB.VN0 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors NCB flits only.
    UNC_Q_RxL_CYCLES_NE_NCB.VN1 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors NCB flits only.
    UNC_Q_RxL_CYCLES_NE_NCS.VN0 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors NCS flits only.
    UNC_Q_RxL_CYCLES_NE_NCS.VN1 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors NCS flits only.
    UNC_Q_RxL_CYCLES_NE_NDR.VN0 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors NDR flits only.
    UNC_Q_RxL_CYCLES_NE_NDR.VN1 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors NDR flits only.
    UNC_Q_RxL_CYCLES_NE_SNP.VN0 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors SNP flits only.
    UNC_Q_RxL_CYCLES_NE_SNP.VN1 Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy. This monitors SNP flits only.
    UNC_Q_RxL_FLITS_G0.IDLE Counts the number of flits received from the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of flits received over QPI that do not hold protocol payload. When QPI is not in a power saving state, it continuously transmits flits across the link. When there are no protocol flits to send, it will send IDLE and NULL flits across. These flits sometimes do carry a payload, such as credit returns, but are generally not considered part of the QPI bandwidth.
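    The bandwidth arithmetic quoted in these flit-event descriptions (flits*80b/time for link bandwidth; data flits * 8B/time for data bandwidth in L0, 4B in L0p) is easy to get wrong in post-processing. A minimal sketch with illustrative counts, not measured data:

        # Link bandwidth counts every flit at 80 bits; "data" bandwidth counts
        # only data flits at 8B each in L0 (4B in L0p), per the text above.
        total_flits = 2.0e9      # e.g. summed over the G0/G1/G2 flit filters
        data_flits = 1.2e9       # e.g. G1.DRS_DATA + G2.NCB_DATA
        seconds = 1.0            # sampling window
        BYTES_PER_DATA_FLIT = 8  # use 4 when the link is in L0p

        link_bw = total_flits * 80 / 8 / seconds       # bytes/sec of raw flit traffic
        data_bw = data_flits * BYTES_PER_DATA_FLIT / seconds
        print(f"link: {link_bw/1e9:.2f} GB/s, data: {data_bw/1e9:.2f} GB/s")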
    UNC_Q_RxL_FLITS_G1.DRS Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of flits received over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency. This does not count data flits received over the NCB channel which transmits non-coherent data.
    UNC_Q_RxL_FLITS_G1.DRS_DATA Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of data flits received over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency. This does not count data flits received over the NCB channel which transmits non-coherent data. This includes only the data flits (not the header).
    UNC_Q_RxL_FLITS_G1.DRS_NONDATA Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of protocol flits received over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency. This does not count data flits received over the NCB channel which transmits non-coherent data. This includes only the header flits (not the data). This includes extended headers.
    UNC_Q_RxL_FLITS_G1.HOM Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of flits received over QPI on the home channel.
    UNC_Q_RxL_FLITS_G1.HOM_NONREQ Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of non-request flits received over QPI on the home channel. These are most commonly snoop responses, and this event can be used as a proxy for that.
    UNC_Q_RxL_FLITS_G1.HOM_REQ Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of data requests received over QPI on the home channel. This basically counts the number of remote memory requests received over QPI. In conjunction with the local read count in the Home Agent, one can calculate the number of LLC Misses.
    UNC_Q_RxL_FLITS_G1.SNP Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of snoop request flits received over QPI. These requests are contained in the snoop channel. This does not include snoop responses, which are received on the home channel.
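    The HOM_REQ entry above suggests an LLC-miss estimate. One hedged reading: HOM_REQ counts remote sockets' misses that target local memory, which can be added to the Home Agent's local read count to approximate read misses serviced by this socket's memory. In the sketch below, UNC_H_REQUESTS.READS is an assumed Home Agent event name, not defined in this section, and all counts are illustrative placeholders:

        # Rough estimate of LLC read misses landing on this socket's memory:
        # remote read requests arriving over QPI plus locally generated reads
        # observed at the Home Agent.
        remote_home_requests = 4.0e7   # UNC_Q_RxL_FLITS_G1.HOM_REQ
        local_ha_reads = 1.1e8         # assumed UNC_H_REQUESTS.READS count

        print(f"approx. read misses served locally: {remote_home_requests + local_ha_reads:.3g}")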
    UNC_Q_RxL_FLITS_G2.NCB Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Number of Non-Coherent Bypass flits. These packets are generally used to transmit non-coherent data across QPI.
    UNC_Q_RxL_FLITS_G2.NCB_DATA Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Number of Non-Coherent Bypass data flits. These flits are generally used to transmit non-coherent data across QPI. This does not include a count of the DRS (coherent) data flits. This only counts the data flits, not the NCB headers.
    UNC_Q_RxL_FLITS_G2.NCB_NONDATA Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Number of Non-Coherent Bypass non-data flits. These packets are generally used to transmit non-coherent data across QPI, and the flits counted here are for headers and other non-data flits. This includes extended headers.
    UNC_Q_RxL_FLITS_G2.NCS Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Number of NCS (non-coherent standard) flits received over QPI. This includes extended headers.
    UNC_Q_RxL_FLITS_G2.NDR_AD Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of flits received over the NDR (Non-Data Response) channel. This channel is used to send a variety of protocol flits including grants and completions. This is only for NDR packets to the local socket which use the AK ring.
    UNC_Q_RxL_FLITS_G2.NDR_AK Counts the number of flits received from the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of flits received over the NDR (Non-Data Response) channel. This channel is used to send a variety of protocol flits including grants and completions. This is only for NDR packets destined for Route-thru to a remote socket.
    UNC_Q_RxL_INSERTS Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.
    UNC_Q_RxL_INSERTS_DRS.VN0 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only DRS flits.
    UNC_Q_RxL_INSERTS_DRS.VN1 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only DRS flits.
    UNC_Q_RxL_INSERTS_HOM.VN0 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only HOM flits.
    UNC_Q_RxL_INSERTS_HOM.VN1 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only HOM flits.
    UNC_Q_RxL_INSERTS_NCB.VN0 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NCB flits.
    UNC_Q_RxL_INSERTS_NCB.VN1 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NCB flits.
    UNC_Q_RxL_INSERTS_NCS.VN0 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NCS flits.
    UNC_Q_RxL_INSERTS_NCS.VN1 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NCS flits.
    UNC_Q_RxL_INSERTS_NDR.VN0 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NDR flits.
    UNC_Q_RxL_INSERTS_NDR.VN1 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only NDR flits.
    UNC_Q_RxL_INSERTS_SNP.VN0 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only SNP flits.
    UNC_Q_RxL_INSERTS_SNP.VN1 Number of allocations into the QPI Rx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime. This monitors only SNP flits.
    UNC_Q_RxL_OCCUPANCY Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.
    UNC_Q_RxL_OCCUPANCY_DRS.VN0 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors DRS flits only.
    UNC_Q_RxL_OCCUPANCY_DRS.VN1 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors DRS flits only.
    UNC_Q_RxL_OCCUPANCY_HOM.VN0 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors HOM flits only.
    UNC_Q_RxL_OCCUPANCY_HOM.VN1 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors HOM flits only.
    UNC_Q_RxL_OCCUPANCY_NCB.VN0 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NCB flits only.
    UNC_Q_RxL_OCCUPANCY_NCB.VN1 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NCB flits only.
    UNC_Q_RxL_OCCUPANCY_NCS.VN0 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NCS flits only.
    UNC_Q_RxL_OCCUPANCY_NCS.VN1 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NCS flits only.
    UNC_Q_RxL_OCCUPANCY_NDR.VN0 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NDR flits only.
    UNC_Q_RxL_OCCUPANCY_NDR.VN1 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors NDR flits only.
    UNC_Q_RxL_OCCUPANCY_SNP.VN0 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors SNP flits only.
    UNC_Q_RxL_OCCUPANCY_SNP.VN1 Accumulates the number of elements in the QPI RxQ in each cycle. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime. This monitors SNP flits only.
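    The two derived metrics named throughout the RxQ events, average occupancy (occupancy accumulator divided by not-empty cycles) and average lifetime via Little's law (occupancy accumulator divided by allocations), in a short sketch with illustrative counts:

        # Average occupancy of the Rx flit buffer while it held anything, and the
        # average number of cycles each flit spent in the buffer.
        occupancy_sum = 6.0e8     # UNC_Q_RxL_OCCUPANCY (accumulated every cycle)
        cycles_not_empty = 1.5e8  # UNC_Q_RxL_CYCLES_NE
        inserts = 5.0e7           # UNC_Q_RxL_INSERTS

        avg_occupancy = occupancy_sum / cycles_not_empty  # entries, when non-empty
        avg_lifetime = occupancy_sum / inserts            # cycles per buffered flit
        print(f"avg occupancy: {avg_occupancy:.2f} entries, avg lifetime: {avg_lifetime:.1f} cycles")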
    UNC_Q_RxL_STALLS_VN0.BGF_DRS Number of stalls trying to send to R3QPI on Virtual Network 0; Stalled a packet from the HOM message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL_STALLS_VN0.BGF_HOM Number of stalls trying to send to R3QPI on Virtual Network 0; Stalled a packet from the DRS message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL_STALLS_VN0.BGF_NCB Number of stalls trying to send to R3QPI on Virtual Network 0; Stalled a packet from the SNP message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL_STALLS_VN0.BGF_NCS Number of stalls trying to send to R3QPI on Virtual Network 0; Stalled a packet from the NDR message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL_STALLS_VN0.BGF_NDR Number of stalls trying to send to R3QPI on Virtual Network 0; Stalled a packet from the NCS message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL_STALLS_VN0.BGF_SNP Number of stalls trying to send to R3QPI on Virtual Network 0; Stalled a packet from the NCB message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL_STALLS_VN0.EGRESS_CREDITS Number of stalls trying to send to R3QPI on Virtual Network 0; Stalled a packet because there were insufficient Egress credits. For details on a message class granularity, use the Egress Credit Occupancy events.
    UNC_Q_RxL_STALLS_VN0.GV Number of stalls trying to send to R3QPI on Virtual Network 0; Stalled because a GV transition (frequency transition) was taking place.
    UNC_Q_RxL_STALLS_VN1.BGF_DRS Number of stalls trying to send to R3QPI on Virtual Network 1; Stalled a packet from the HOM message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL_STALLS_VN1.BGF_HOM Number of stalls trying to send to R3QPI on Virtual Network 1; Stalled a packet from the DRS message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL_STALLS_VN1.BGF_NCB Number of stalls trying to send to R3QPI on Virtual Network 1; Stalled a packet from the SNP message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL_STALLS_VN1.BGF_NCS Number of stalls trying to send to R3QPI on Virtual Network 1; Stalled a packet from the NDR message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL_STALLS_VN1.BGF_NDR Number of stalls trying to send to R3QPI on Virtual Network 1; Stalled a packet from the NCS message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL_STALLS_VN1.BGF_SNP Number of stalls trying to send to R3QPI on Virtual Network 1; Stalled a packet from the NCB message class because there were not enough BGF credits. In bypass mode, we will stall on the packet boundary, while in RxQ mode we will stall on the flit boundary.
    UNC_Q_RxL0_POWER_CYCLES Number of QPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.
    UNC_Q_RxL0P_POWER_CYCLES Number of QPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the QPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA-optimized workloads that largely use QPI only for snoops and their responses. Use edge detect to count the number of instances when the QPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.
    UNC_Q_TxL_BYPASSED Counts the number of times that an incoming flit was able to bypass the Tx flit buffer and pass directly out the QPI Link. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.
    UNC_Q_TxL_CRC_NO_CREDITS.ALMOST_FULL Number of cycles when the Tx side ran out of Link Layer Retry credits, causing the Tx to stall.; When LLR is almost full, we block some but not all packets.
    UNC_Q_TxL_CRC_NO_CREDITS.FULL Number of cycles when the Tx side ran out of Link Layer Retry credits, causing the Tx to stall.; When LLR is totally full, we are not allowed to send any packets.
    UNC_Q_TxL_CYCLES_NE Counts the number of cycles when the TxQ is not empty. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.
    UNC_Q_TxL_FLITS_G0.DATA Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of data flits transmitted over QPI. Each flit contains 64b of data. This includes both DRS and NCB data flits (coherent and non-coherent). This can be used to calculate the data bandwidth of the QPI link. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits. This does not include the header flits that go in data packets.
    UNC_Q_TxL_FLITS_G0.NON_DATA Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of non-NULL non-data flits transmitted across QPI. This basically tracks the protocol overhead on the QPI link. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits. This includes the header flits for data packets.
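    The bandwidth arithmetic spelled out in the two descriptions above reduces to a couple of lines. Below is a minimal Python sketch with made-up counts; the one-second window and the flit totals are assumptions, not measured values.

        # QPI link vs. data bandwidth from flit counts, assuming full-width (L0) mode.
        interval_s = 1.0                # assumed measurement window
        total_flits = 1_200_000_000     # idle + protocol + data flits (assumed)
        data_flits = 900_000_000        # UNC_Q_TxL_FLITS_G0.DATA reading (assumed)

        link_bw_gbps = total_flits * 80 / 8 / interval_s / 1e9   # flits * 80b / time
        data_bw_gbps = data_flits * 8 / interval_s / 1e9         # data flits * 8B / time
        # Per the description, substitute 4B for 8B while the link is in L0p.
        print(f"link: {link_bw_gbps:.1f} GB/s, data: {data_bw_gbps:.1f} GB/s")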
    UNC_Q_TxL_FLITS_G1.DRS Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of flits transmitted over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency.
    UNC_Q_TxL_FLITS_G1.DRS_DATA Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of data flits transmitted over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency. This does not count data flits transmitted over the NCB channel which transmits non-coherent data. This includes only the data flits (not the header).
    UNC_Q_TxL_FLITS_G1.DRS_NONDATA Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of protocol flits transmitted over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency. This does not count data flits transmitted over the NCB channel which transmits non-coherent data. This includes only the header flits (not the data). This includes extended headers.
    UNC_Q_TxL_FLITS_G1.HOM Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of flits transmitted over QPI on the home channel.
    UNC_Q_TxL_FLITS_G1.HOM_NONREQ Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of non-request flits transmitted over QPI on the home channel. These are most commonly snoop responses, and this event can be used as a proxy for that.
    UNC_Q_TxL_FLITS_G1.HOM_REQ Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of data requests transmitted over QPI on the home channel. This basically counts the number of remote memory requests transmitted over QPI. In conjunction with the local read count in the Home Agent, one can calculate the number of LLC Misses.
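    As the HOM_REQ description suggests, outgoing home-channel requests can be combined with the Home Agent's local read count to approximate LLC misses. The sketch below is a hedged example: the Home Agent event name and the simple sum are assumptions, not something this table specifies.

        # Rough per-socket LLC-miss estimate (sample values).
        remote_home_requests = 40_000_000   # UNC_Q_TxL_FLITS_G1.HOM_REQ summed over links (assumed)
        local_ha_reads = 75_000_000         # Home Agent local reads, e.g. UNC_H_REQUESTS.READS (assumed)

        approx_llc_misses = remote_home_requests + local_ha_reads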
    UNC_Q_TxL_FLITS_G1.SNP Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of snoop request flits transmitted over QPI. These requests are contained in the snoop channel. This does not include snoop responses, which are transmitted on the home channel.
    UNC_Q_TxL_FLITS_G2.NCB Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Number of Non-Coherent Bypass flits. These packets are generally used to transmit non-coherent data across QPI.
    UNC_Q_TxL_FLITS_G2.NCB_DATA Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Number of Non-Coherent Bypass data flits. These flits are generally used to transmit non-coherent data across QPI. This does not include a count of the DRS (coherent) data flits. This only counts the data flits, not the NCB headers.
    UNC_Q_TxL_FLITS_G2.NCB_NONDATA Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Number of Non-Coherent Bypass non-data flits. These packets are generally used to transmit non-coherent data across QPI, and the flits counted here are for headers and other non-data flits. This includes extended headers.
    UNC_Q_TxL_FLITS_G2.NCS Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Number of NCS (non-coherent standard) flits transmitted over QPI. This includes extended headers.
    UNC_Q_TxL_FLITS_G2.NDR_AD Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of flits transmitted over the NDR (Non-Data Response) channel. This channel is used to send a variety of protocol flits including grants and completions. This is only for NDR packets to the local socket which use the AK ring.
    UNC_Q_TxL_FLITS_G2.NDR_AK Counts the number of flits transmitted across the QPI Link. This is one of three "groups" that allow us to track flits. It includes filters for NDR, NCB, and NCS message classes. Each "flit" is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four "fits", each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI "speed" (for example, 8.0 GT/s), the "transfers" here refer to "fits". Therefore, in L0, the system will transfer 1 "flit" at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as "data" bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual "data" and an additional 16 bits of other information. To calculate "data" bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of flits transmitted over the NDR (Non-Data Response) channel. This channel is used to send a variety of protocol flits including grants and completions. This is only for NDR packets destined for Route-thru to a remote socket.
    UNC_Q_TxL_INSERTS Number of allocations into the QPI Tx Flit Buffer. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.
    UNC_Q_TxL_OCCUPANCY Accumulates the number of flits in the TxQ. Generally, when data is transmitted across QPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This can be used with the cycles not empty event to track average occupancy, or the allocations event to track average lifetime in the TxQ.
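    The insert, not-empty, and occupancy events above combine in the usual Little's-law fashion. A small Python sketch with made-up counts:

        # Average Tx flit buffer (TxQ) statistics from the three events above.
        txl_occupancy = 5_000_000   # UNC_Q_TxL_OCCUPANCY (flit-cycles accumulated)
        txl_cycles_ne = 800_000     # UNC_Q_TxL_CYCLES_NE (cycles the TxQ was not empty)
        txl_inserts = 400_000       # UNC_Q_TxL_INSERTS (allocations into the TxQ)

        avg_depth_when_busy = txl_occupancy / txl_cycles_ne   # average flits while non-empty
        avg_lifetime_cycles = txl_occupancy / txl_inserts     # average cycles a flit spends queued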
    UNC_Q_TxL0_POWER_CYCLES Number of QPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.
    UNC_Q_TxL0P_POWER_CYCLES Number of QPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the QPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize QPI for snoops and their responses. Use edge detect to count the number of instances when the QPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.
    UNC_Q_TxR_AD_HOM_CREDIT_ACQUIRED.VN0 Number of link layer credits into the R3 (for transactions across the BGF) acquired each cycle. Flow Control FIFO for Home messages on AD.
    UNC_Q_TxR_AD_HOM_CREDIT_ACQUIRED.VN1 Number of link layer credits into the R3 (for transactions across the BGF) acquired each cycle. Flow Control FIFO for Home messages on AD.
    UNC_Q_TxR_AD_HOM_CREDIT_OCCUPANCY.VN0 Occupancy event that tracks the number of link layer credits into the R3 (for transactions across the BGF) available in each cycle. Flow Control FIFO for HOM messages on AD.
    UNC_Q_TxR_AD_HOM_CREDIT_OCCUPANCY.VN1 Occupancy event that tracks the number of link layer credits into the R3 (for transactions across the BGF) available in each cycle. Flow Control FIFO for HOM messages on AD.
    UNC_Q_TxR_AD_NDR_CREDIT_ACQUIRED.VN0 Number of link layer credits into the R3 (for transactions across the BGF) acquired each cycle. Flow Control FIFO for NDR messages on AD.
    UNC_Q_TxR_AD_NDR_CREDIT_ACQUIRED.VN1 Number of link layer credits into the R3 (for transactions across the BGF) acquired each cycle. Flow Control FIFO for NDR messages on AD.
    UNC_Q_TxR_AD_NDR_CREDIT_OCCUPANCY.VN0 Occupancy event that tracks the number of link layer credits into the R3 (for transactions across the BGF) available in each cycle. Flow Control FIFO for NDR messages on AD.
    UNC_Q_TxR_AD_NDR_CREDIT_OCCUPANCY.VN1 Occupancy event that tracks the number of link layer credits into the R3 (for transactions across the BGF) available in each cycle. Flow Control FIFO for NDR messages on AD.
    UNC_Q_TxR_AD_SNP_CREDIT_ACQUIRED.VN0 Number of link layer credits into the R3 (for transactions across the BGF) acquired each cycle. Flow Control FIFO for Snoop messages on AD.
    UNC_Q_TxR_AD_SNP_CREDIT_ACQUIRED.VN1 Number of link layer credits into the R3 (for transactions across the BGF) acquired each cycle. Flow Control FIFO for Snoop messages on AD.
    UNC_Q_TxR_AD_SNP_CREDIT_OCCUPANCY.VN0 Occupancy event that tracks the number of link layer credits into the R3 (for transactions across the BGF) available in each cycle. Flow Control FIFO for Snoop messages on AD.
    UNC_Q_TxR_AD_SNP_CREDIT_OCCUPANCY.VN1 Occupancy event that tracks the number of link layer credits into the R3 (for transactions across the BGF) available in each cycle. Flow Control FIFO for Snoop messages on AD.
    UNC_Q_TxR_AK_NDR_CREDIT_ACQUIRED Number of credits into the R3 (for transactions across the BGF) acquired each cycle. Local NDR message class to AK Egress.
    UNC_Q_TxR_AK_NDR_CREDIT_OCCUPANCY Occupancy event that tracks the number of credits into the R3 (for transactions across the BGF) available in each cycle. Local NDR message class to AK Egress.
    UNC_Q_TxR_BL_DRS_CREDIT_ACQUIRED.VN_SHR Number of credits into the R3 (for transactions across the BGF) acquired each cycle. DRS message class to BL Egress.
    UNC_Q_TxR_BL_DRS_CREDIT_ACQUIRED.VN0 Number of credits into the R3 (for transactions across the BGF) acquired each cycle. DRS message class to BL Egress.
    UNC_Q_TxR_BL_DRS_CREDIT_ACQUIRED.VN1 Number of credits into the R3 (for transactions across the BGF) acquired each cycle. DRS message class to BL Egress.
    UNC_Q_TxR_BL_DRS_CREDIT_OCCUPANCY.VN_SHR Occupancy event that tracks the number of credits into the R3 (for transactions across the BGF) available in each cycle. DRS message class to BL Egress.
    UNC_Q_TxR_BL_DRS_CREDIT_OCCUPANCY.VN0 Occupancy event that tracks the number of credits into the R3 (for transactions across the BGF) available in each cycle. DRS message class to BL Egress.
    UNC_Q_TxR_BL_DRS_CREDIT_OCCUPANCY.VN1 Occupancy event that tracks the number of credits into the R3 (for transactions across the BGF) available in each cycle. DRS message class to BL Egress.
    UNC_Q_TxR_BL_NCB_CREDIT_ACQUIRED.VN0 Number of credits into the R3 (for transactions across the BGF) acquired each cycle. NCB message class to BL Egress.
    UNC_Q_TxR_BL_NCB_CREDIT_ACQUIRED.VN1 Number of credits into the R3 (for transactions across the BGF) acquired each cycle. NCB message class to BL Egress.
    UNC_Q_TxR_BL_NCB_CREDIT_OCCUPANCY.VN0 Occupancy event that tracks the number of credits into the R3 (for transactions across the BGF) available in each cycle. NCB message class to BL Egress.
    UNC_Q_TxR_BL_NCB_CREDIT_OCCUPANCY.VN1 Occupancy event that tracks the number of credits into the R3 (for transactions across the BGF) available in each cycle. NCB message class to BL Egress.
    UNC_Q_TxR_BL_NCS_CREDIT_ACQUIRED.VN0 Number of credits into the R3 (for transactions across the BGF) acquired each cycle. NCS message class to BL Egress.
    UNC_Q_TxR_BL_NCS_CREDIT_ACQUIRED.VN1 Number of credits into the R3 (for transactions across the BGF) acquired each cycle. NCS message class to BL Egress.
    UNC_Q_TxR_BL_NCS_CREDIT_OCCUPANCY.VN0 Occupancy event that tracks the number of credits into the R3 (for transactions across the BGF) available in each cycle. NCS message class to BL Egress.
    UNC_Q_TxR_BL_NCS_CREDIT_OCCUPANCY.VN1 Occupancy event that tracks the number of credits into the R3 (for transactions across the BGF) available in each cycle. NCS message class to BL Egress.
    UNC_Q_VNA_CREDIT_RETURN_OCCUPANCY Number of VNA credits in the Rx side that are waiting to be returned back across the link.
    UNC_Q_VNA_CREDIT_RETURNS Number of VNA credits returned.
    UNC_R2_CLOCKTICKS Counts the number of uclks in the R2PCIe uclk domain. This could be slightly different than the count in the Ubox because of enable/freeze delays. However, because the R2PCIe is close to the Ubox, they generally should not diverge by more than a handful of cycles.
    UNC_R2_IIO_CREDIT.ISOCH_QPI0
    UNC_R2_IIO_CREDIT.ISOCH_QPI1
    UNC_R2_IIO_CREDIT.PRQ_QPI0
    UNC_R2_IIO_CREDIT.PRQ_QPI1
    UNC_R2_IIO_CREDITS_ACQUIRED.DRS Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly).; Credits to the IIO for the DRS message class.
    UNC_R2_IIO_CREDITS_ACQUIRED.NCB Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly).; Credits to the IIO for the NCB message class.
    UNC_R2_IIO_CREDITS_ACQUIRED.NCS Counts the number of credits that are acquired in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly).; Credits to the IIO for the NCS message class.
    UNC_R2_IIO_CREDITS_USED.DRS Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly).; Credits to the IIO for the DRS message class.
    UNC_R2_IIO_CREDITS_USED.NCB Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly).; Credits to the IIO for the NCB message class.
    UNC_R2_IIO_CREDITS_USED.NCS Counts the number of cycles when one or more credits in the R2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly).; Credits to the IIO for the NCS message class.
    UNC_R2_RING_AD_USED.CCW Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R2_RING_AD_USED.CCW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity.
    UNC_R2_RING_AD_USED.CCW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity.
    UNC_R2_RING_AD_USED.CW Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R2_RING_AD_USED.CW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity.
    UNC_R2_RING_AD_USED.CW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity.
    UNC_R2_RING_AK_BOUNCES.DN Counts the number of times when a request destined for the AK ingress bounced.
    UNC_R2_RING_AK_BOUNCES.UP Counts the number of times when a request destined for the AK ingress bounced.
    UNC_R2_RING_AK_USED.CCW Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R2_RING_AK_USED.CCW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity.
    UNC_R2_RING_AK_USED.CCW_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity.
    UNC_R2_RING_AK_USED.CW Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R2_RING_AK_USED.CW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity.
    UNC_R2_RING_AK_USED.CW_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity.
    UNC_R2_RING_BL_USED.CCW Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R2_RING_BL_USED.CCW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity.
    UNC_R2_RING_BL_USED.CCW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity.
    UNC_R2_RING_BL_USED.CW Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R2_RING_BL_USED.CW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity.
    UNC_R2_RING_BL_USED.CW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity.
    UNC_R2_RING_IV_USED.ANY Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.
    UNC_R2_RING_IV_USED.CCW Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.
    UNC_R2_RING_IV_USED.CW Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.
    UNC_R2_RxR_CYCLES_NE.NCB Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; NCB Ingress Queue
    UNC_R2_RxR_CYCLES_NE.NCS Counts the number of cycles when the R2PCIe Ingress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; NCS Ingress Queue
    UNC_R2_RxR_INSERTS.NCB Counts the number of allocations into the R2PCIe Ingress. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; NCB Ingress Queue
    UNC_R2_RxR_INSERTS.NCS Counts the number of allocations into the R2PCIe Ingress. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; NCS Ingress Queue
    UNC_R2_RxR_OCCUPANCY.DRS Accumulates the occupancy of a given R2PCIe Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the R2PCIe Ingress Not Empty event to calculate average occupancy or the R2PCIe Ingress Allocations event in order to calculate average queuing latency.; DRS Ingress Queue
    UNC_R2_SBO0_CREDIT_OCCUPANCY.AD Number of Sbo 0 credits in use in a given cycle, per ring.
    UNC_R2_SBO0_CREDIT_OCCUPANCY.BL Number of Sbo 0 credits in use in a given cycle, per ring.
    UNC_R2_SBO0_CREDITS_ACQUIRED.AD Number of Sbo 0 credits acquired in a given cycle, per ring.
    UNC_R2_SBO0_CREDITS_ACQUIRED.BL Number of Sbo 0 credits acquired in a given cycle, per ring.
    UNC_R2_STALL_NO_SBO_CREDIT.SBO0_AD Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
    UNC_R2_STALL_NO_SBO_CREDIT.SBO0_BL Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
    UNC_R2_STALL_NO_SBO_CREDIT.SBO1_AD Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
    UNC_R2_STALL_NO_SBO_CREDIT.SBO1_BL Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
    UNC_R2_TxR_CYCLES_FULL.AD Counts the number of cycles when the R2PCIe Egress buffer is full.; AD Egress Queue
    UNC_R2_TxR_CYCLES_FULL.AK Counts the number of cycles when the R2PCIe Egress buffer is full.; AK Egress Queue
    UNC_R2_TxR_CYCLES_FULL.BL Counts the number of cycles when the R2PCIe Egress buffer is full.; BL Egress Queue
    UNC_R2_TxR_CYCLES_NE.AD Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity.; AD Egress Queue
    UNC_R2_TxR_CYCLES_NE.AK Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity.; AK Egress Queue
    UNC_R2_TxR_CYCLES_NE.BL Counts the number of cycles when the R2PCIe Egress is not empty. This tracks one of the three rings that are used by the R2PCIe agent. This can be used in conjunction with the R2PCIe Egress Occupancy Accumulator event in order to calculate average queue occupancy. Only a single Egress queue can be tracked at any given time. It is not possible to filter based on direction or polarity.; BL Egress Queue
    UNC_R2_TxR_NACK_CW.DN_AD AD CounterClockwise Egress Queue
    UNC_R2_TxR_NACK_CW.DN_AK AK CounterClockwise Egress Queue
    UNC_R2_TxR_NACK_CW.DN_BL BL CounterClockwise Egress Queue
    UNC_R2_TxR_NACK_CW.UP_AD AD Clockwise Egress Queue
    UNC_R2_TxR_NACK_CW.UP_AK AK Clockwise Egress Queue
    UNC_R2_TxR_NACK_CW.UP_BL BL Clockwise Egress Queue
    UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO_15_17 No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 15&17
    UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO10 No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 10
    UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO11 No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 11
    UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO12 No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 12
    UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO13 No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 13
    UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO14_16 No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 14&16
    UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO8 No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 8
    UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO9 No credits available to send to Cbox on the AD Ring (covers higher CBoxes); Cbox 9
    UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO0 No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 0
    UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO1 No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 1
    UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO2 No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 2
    UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO3 No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 3
    UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO4 No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 4
    UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO5 No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 5
    UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO6 No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 6
    UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO7 No credits available to send to Cbox on the AD Ring (covers lower CBoxes); Cbox 7
    UNC_R3_CLOCKTICKS Counts the number of uclks in the QPI uclk domain. This could be slightly different than the count in the Ubox because of enable/freeze delays. However, because the QPI Agent is close to the Ubox, they generally should not diverge by more than a handful of cycles.
    UNC_R3_HA_R2_BL_CREDITS_EMPTY.HA0 No credits available to send to either HA or R2 on the BL Ring; HA0
    UNC_R3_HA_R2_BL_CREDITS_EMPTY.HA1 No credits available to send to either HA or R2 on the BL Ring; HA1
    UNC_R3_HA_R2_BL_CREDITS_EMPTY.R2_NCB No credits available to send to either HA or R2 on the BL Ring; R2 NCB Messages
    UNC_R3_HA_R2_BL_CREDITS_EMPTY.R2_NCS No credits available to send to either HA or R2 on the BL Ring; R2 NCS Messages
    UNC_R3_IOT_BACKPRESSURE.HUB IOT Backpressure
    UNC_R3_IOT_BACKPRESSURE.SAT IOT Backpressure
    UNC_R3_IOT_CTS_HI.CTS2 Debug Mask/Match Tie-Ins
    UNC_R3_IOT_CTS_HI.CTS3 Debug Mask/Match Tie-Ins
    UNC_R3_IOT_CTS_LO.CTS0 Debug Mask/Match Tie-Ins
    UNC_R3_IOT_CTS_LO.CTS1 Debug Mask/Match Tie-Ins
    UNC_R3_QPI0_AD_CREDITS_EMPTY.VN0_HOM No credits available to send to QPI0 on the AD Ring; VN0 HOM Messages
    UNC_R3_QPI0_AD_CREDITS_EMPTY.VN0_NDR No credits available to send to QPI0 on the AD Ring; VN0 NDR Messages
    UNC_R3_QPI0_AD_CREDITS_EMPTY.VN0_SNP No credits available to send to QPI0 on the AD Ring; VN0 SNP Messages
    UNC_R3_QPI0_AD_CREDITS_EMPTY.VN1_HOM No credits available to send to QPI0 on the AD Ring; VN1 HOM Messages
    UNC_R3_QPI0_AD_CREDITS_EMPTY.VN1_NDR No credits available to send to QPI0 on the AD Ring; VN1 NDR Messages
    UNC_R3_QPI0_AD_CREDITS_EMPTY.VN1_SNP No credits available to send to QPI0 on the AD Ring; VN1 SNP Messages
    UNC_R3_QPI0_AD_CREDITS_EMPTY.VNA No credits available to send to QPI0 on the AD Ring; VNA
    UNC_R3_QPI0_BL_CREDITS_EMPTY.VN1_HOM No credits available to send to QPI0 on the BL Ring; VN1 HOM Messages
    UNC_R3_QPI0_BL_CREDITS_EMPTY.VN1_NDR No credits available to send to QPI0 on the BL Ring; VN1 NDR Messages
    UNC_R3_QPI0_BL_CREDITS_EMPTY.VN1_SNP No credits available to send to QPI0 on the BL Ring; VN1 SNP Messages
    UNC_R3_QPI0_BL_CREDITS_EMPTY.VNA No credits available to send to QPI0 on the BL Ring; VNA
    UNC_R3_QPI1_AD_CREDITS_EMPTY.VN1_HOM No credits available to send to QPI1 on the AD Ring; VN1 HOM Messages
    UNC_R3_QPI1_AD_CREDITS_EMPTY.VN1_NDR No credits available to send to QPI1 on the AD Ring; VN1 NDR Messages
    UNC_R3_QPI1_AD_CREDITS_EMPTY.VN1_SNP No credits available to send to QPI1 on the AD Ring; VN1 SNP Messages
    UNC_R3_QPI1_AD_CREDITS_EMPTY.VNA No credits available to send to QPI1 on the AD Ring; VNA
    UNC_R3_QPI1_BL_CREDITS_EMPTY.VN0_HOM No credits available to send to QPI1 on the BL Ring; VN0 HOM Messages
    UNC_R3_QPI1_BL_CREDITS_EMPTY.VN0_NDR No credits available to send to QPI1 on the BL Ring; VN0 NDR Messages
    UNC_R3_QPI1_BL_CREDITS_EMPTY.VN0_SNP No credits available to send to QPI1 on the BL Ring; VN0 SNP Messages
    UNC_R3_QPI1_BL_CREDITS_EMPTY.VN1_HOM No credits available to send to QPI1 on the BL Ring; VN1 HOM Messages
    UNC_R3_QPI1_BL_CREDITS_EMPTY.VN1_NDR No credits available to send to QPI1 on the BL Ring; VN1 NDR Messages
    UNC_R3_QPI1_BL_CREDITS_EMPTY.VN1_SNP No credits available to send to QPI1 on the BL Ring; VN1 SNP Messages
    UNC_R3_QPI1_BL_CREDITS_EMPTY.VNA No credits available to send to QPI1 on the BL Ring; VNA
    UNC_R3_RING_AD_USED.CCW Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R3_RING_AD_USED.CCW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity.
    UNC_R3_RING_AD_USED.CCW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity.
    UNC_R3_RING_AD_USED.CW Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R3_RING_AD_USED.CW_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity.
    UNC_R3_RING_AD_USED.CW_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity.
    UNC_R3_RING_AK_USED.CCW Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R3_RING_AK_USED.CCW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity.
    UNC_R3_RING_AK_USED.CCW_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity.
    UNC_R3_RING_AK_USED.CW Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R3_RING_AK_USED.CW_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity.
    UNC_R3_RING_AK_USED.CW_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity.
    UNC_R3_RING_BL_USED.CCW Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R3_RING_BL_USED.CCW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Even ring polarity.
    UNC_R3_RING_BL_USED.CCW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Counterclockwise and Odd ring polarity.
    UNC_R3_RING_BL_USED.CW Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.
    UNC_R3_RING_BL_USED.CW_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Even ring polarity.
    UNC_R3_RING_BL_USED.CW_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.; Filters for the Clockwise and Odd ring polarity.
    UNC_R3_RING_IV_USED.ANY Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.
    UNC_R3_RING_IV_USED.CW Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop.
    UNC_R3_RING_SINK_STARVED.AK Number of cycles the ring stop is in starvation (per ring).
    UNC_R3_RxR_CYCLES_NE.HOM Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; HOM Ingress Queue
    UNC_R3_RxR_CYCLES_NE.NDR Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; NDR Ingress Queue
    UNC_R3_RxR_CYCLES_NE.SNP Counts the number of cycles when the QPI Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; SNP Ingress Queue
    UNC_R3_RxR_CYCLES_NE_VN1.DRS Counts the number of cycles when the QPI VN1 Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; DRS Ingress Queue
    UNC_R3_RxR_CYCLES_NE_VN1.HOM Counts the number of cycles when the QPI VN1 Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; HOM Ingress Queue
    UNC_R3_RxR_CYCLES_NE_VN1.NCB Counts the number of cycles when the QPI VN1 Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; NCB Ingress Queue
    UNC_R3_RxR_CYCLES_NE_VN1.NCS Counts the number of cycles when the QPI VN1 Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; NCS Ingress Queue
    UNC_R3_RxR_CYCLES_NE_VN1.NDR Counts the number of cycles when the QPI VN1 Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; NDR Ingress Queue
    UNC_R3_RxR_CYCLES_NE_VN1.SNP Counts the number of cycles when the QPI VN1 Ingress is not empty. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; SNP Ingress Queue
    UNC_R3_RxR_INSERTS.DRS Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; DRS Ingress Queue
    UNC_R3_RxR_INSERTS.HOM Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; HOM Ingress Queue
    UNC_R3_RxR_INSERTS.NCB Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; NCB Ingress Queue
    UNC_R3_RxR_INSERTS.NCS Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; NCS Ingress Queue
    UNC_R3_RxR_INSERTS.NDR Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; NDR Ingress Queue
    UNC_R3_RxR_INSERTS.SNP Counts the number of allocations into the QPI Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; SNP Ingress Queue
    UNC_R3_RxR_INSERTS_VN1.DRS Counts the number of allocations into the QPI VN1 Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; DRS Ingress Queue
    UNC_R3_RxR_INSERTS_VN1.HOM Counts the number of allocations into the QPI VN1 Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; HOM Ingress Queue
    UNC_R3_RxR_INSERTS_VN1.NCB Counts the number of allocations into the QPI VN1 Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; NCB Ingress Queue
    UNC_R3_RxR_INSERTS_VN1.NCS Counts the number of allocations into the QPI VN1 Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; NCS Ingress Queue
    UNC_R3_RxR_INSERTS_VN1.NDR Counts the number of allocations into the QPI VN1 Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; NDR Ingress Queue
    UNC_R3_RxR_INSERTS_VN1.SNP Counts the number of allocations into the QPI VN1 Ingress. This tracks one of the three rings that are used by the QPI agent. This can be used in conjunction with the QPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; SNP Ingress Queue
    UNC_R3_RxR_OCCUPANCY_VN1.DRS Accumulates the occupancy of a given QPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI VN1 Ingress Not Empty event to calculate average occupancy, or with the QPI VN1 Ingress Allocations event to calculate average queuing latency.; DRS Ingress Queue
    UNC_R3_RxR_OCCUPANCY_VN1.HOM Accumulates the occupancy of a given QPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI VN1 Ingress Not Empty event to calculate average occupancy, or with the QPI VN1 Ingress Allocations event to calculate average queuing latency.; HOM Ingress Queue
    UNC_R3_RxR_OCCUPANCY_VN1.NCB Accumulates the occupancy of a given QPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI VN1 Ingress Not Empty event to calculate average occupancy, or with the QPI VN1 Ingress Allocations event to calculate average queuing latency.; NCB Ingress Queue
    UNC_R3_RxR_OCCUPANCY_VN1.NCS Accumulates the occupancy of a given QPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI VN1 Ingress Not Empty event to calculate average occupancy, or with the QPI VN1 Ingress Allocations event to calculate average queuing latency.; NCS Ingress Queue
    UNC_R3_RxR_OCCUPANCY_VN1.NDR Accumulates the occupancy of a given QPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI VN1 Ingress Not Empty event to calculate average occupancy, or with the QPI VN1 Ingress Allocations event to calculate average queuing latency.; NDR Ingress Queue
    UNC_R3_RxR_OCCUPANCY_VN1.SNP Accumulates the occupancy of a given QPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the QPI VN1 Ingress Not Empty event to calculate average occupancy, or with the QPI VN1 Ingress Allocations event to calculate average queuing latency.; SNP Ingress Queue
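The descriptions above reduce to a standard Little's-law calculation: the occupancy accumulator divided by not-empty cycles gives average queue depth, and divided by allocations gives average queuing latency. A minimal sketch, assuming raw counter values have already been collected for the DRS VN1 Ingress queue; all sample numbers are hypothetical, and the event-to-counter plumbing (perf, VTune, raw MSR reads) is outside its scope:

```python
# Hypothetical counts sampled over one measurement interval.
occupancy_accum = 1_200_000   # UNC_R3_RxR_OCCUPANCY_VN1.DRS
not_empty_cycles = 400_000    # UNC_R3_RxR_CYCLES_NE_VN1.DRS
inserts = 150_000             # UNC_R3_RxR_INSERTS_VN1.DRS

# Average queue depth while the queue holds at least one entry.
avg_occupancy = occupancy_accum / not_empty_cycles

# Little's law: average queuing latency, in uncore cycles per allocation.
avg_latency = occupancy_accum / inserts

print(f"avg occupancy: {avg_occupancy:.2f} entries")
print(f"avg latency:   {avg_latency:.2f} cycles")
```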
    UNC_R3_SBO0_CREDIT_OCCUPANCY.AD Number of Sbo 0 credits in use in a given cycle, per ring.
    UNC_R3_SBO0_CREDIT_OCCUPANCY.BL Number of Sbo 0 credits in use in a given cycle, per ring.
    UNC_R3_SBO0_CREDITS_ACQUIRED.AD Number of Sbo 0 credits acquired in a given cycle, per ring.
    UNC_R3_SBO0_CREDITS_ACQUIRED.BL Number of Sbo 0 credits acquired in a given cycle, per ring.
    UNC_R3_SBO1_CREDIT_OCCUPANCY.AD Number of Sbo 1 credits in use in a given cycle, per ring.
    UNC_R3_SBO1_CREDIT_OCCUPANCY.BL Number of Sbo 1 credits in use in a given cycle, per ring.
    UNC_R3_SBO1_CREDITS_ACQUIRED.AD Number of Sbo 1 credits acquired in a given cycle, per ring.
    UNC_R3_SBO1_CREDITS_ACQUIRED.BL Number of Sbo 1 credits acquired in a given cycle, per ring.
    UNC_R3_STALL_NO_SBO_CREDIT.SBO0_AD Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
    UNC_R3_STALL_NO_SBO_CREDIT.SBO0_BL Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
    UNC_R3_STALL_NO_SBO_CREDIT.SBO1_AD Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
    UNC_R3_STALL_NO_SBO_CREDIT.SBO1_BL Number of cycles Egress is stalled waiting for an Sbo credit to become available. Per Sbo, per Ring.
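A common use of the stall counts above is to express them as a fraction of total uncore cycles. A minimal sketch, assuming an R3QPI clockticks count is available from the same box over the same interval (that event is not part of this excerpt, and all values are hypothetical):

```python
# Fraction of time the Egress was stalled waiting for an Sbo 0 AD credit.
stall_cycles = 25_000     # UNC_R3_STALL_NO_SBO_CREDIT.SBO0_AD
clockticks = 2_000_000    # R3QPI uncore clocks (assumed available)

print(f"Egress stalled on Sbo 0 AD credits {stall_cycles / clockticks:.1%} of the time")
```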
    UNC_R3_TxR_NACK.DN_AD AD CounterClockwise Egress Queue
    UNC_R3_TxR_NACK.DN_AK AK CounterClockwise Egress Queue
    UNC_R3_TxR_NACK.DN_BL BL CounterClockwise Egress Queue
    UNC_R3_TxR_NACK.UP_AD AD Clockwise Egress Queue
    UNC_R3_TxR_NACK.UP_AK AK Clockwise Egress Queue
    UNC_R3_TxR_NACK.UP_BL BL Clockwise Egress Queue
    UNC_R3_VN0_CREDITS_REJECT.DRS Number of times a request failed to acquire a DRS VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation.; Filter for Data Response (DRS). DRS is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using DRS.
    UNC_R3_VN0_CREDITS_REJECT.HOM Number of times a request failed to acquire a HOM VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation.; Filter for the Home (HOM) message class. HOM is generally used to send requests, request responses, and snoop responses.
    UNC_R3_VN0_CREDITS_REJECT.NCB Number of times a request failed to acquire an NCB VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation.; Filter for Non-Coherent Broadcast (NCB). NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.
    UNC_R3_VN0_CREDITS_REJECT.NCS Number of times a request failed to acquire an NCS VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation.; Filter for Non-Coherent Standard (NCS).
    UNC_R3_VN0_CREDITS_REJECT.NDR Number of times a request failed to acquire an NDR VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation.; NDR packets are used to transmit a variety of protocol flits including grants and completions (CMP).
    UNC_R3_VN0_CREDITS_REJECT.SNP Number of times a request failed to acquire an SNP VN0 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN0 credit and was delayed. This should generally be a rare situation.; Filter for Snoop (SNP) message class. SNP is used for outgoing snoops. Note that snoop responses flow on the HOM message class.
    UNC_R3_VN0_CREDITS_USED.DRS Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; Filter for Data Response (DRS). DRS is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using DRS.
    UNC_R3_VN0_CREDITS_USED.HOM Number of times a VN0 credit was used on the HOM message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; Filter for the Home (HOM) message class. HOM is generally used to send requests, request responses, and snoop responses.
    UNC_R3_VN0_CREDITS_USED.NCB Number of times a VN0 credit was used on the NCB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; Filter for Non-Coherent Broadcast (NCB). NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.
    UNC_R3_VN0_CREDITS_USED.NCS Number of times a VN0 credit was used on the NCS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; Filter for Non-Coherent Standard (NCS).
    UNC_R3_VN0_CREDITS_USED.NDR Number of times a VN0 credit was used on the NDR message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; NDR packets are used to transmit a variety of protocol flits including grants and completions (CMP).
    UNC_R3_VN0_CREDITS_USED.SNP Number of times a VN0 credit was used on the SNP message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; Filter for Snoop (SNP) message class. SNP is used for outgoing snoops. Note that snoop responses flow on the HOM message class.
    UNC_R3_VN1_CREDITS_REJECT.DRS Number of times a request failed to acquire a DRS VN1 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN1 credit and was delayed. This should generally be a rare situation.; Filter for Data Response (DRS). DRS is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using DRS.
    UNC_R3_VN1_CREDITS_REJECT.HOM Number of times a request failed to acquire a HOM VN1 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN1 credit and was delayed. This should generally be a rare situation.; Filter for the Home (HOM) message class. HOM is generally used to send requests, request responses, and snoop responses.
    UNC_R3_VN1_CREDITS_REJECT.NCB Number of times a request failed to acquire an NCB VN1 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN1 credit and was delayed. This should generally be a rare situation.; Filter for Non-Coherent Broadcast (NCB). NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.
    UNC_R3_VN1_CREDITS_REJECT.NCS Number of times a request failed to acquire an NCS VN1 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN1 credit and was delayed. This should generally be a rare situation.; Filter for Non-Coherent Standard (NCS).
    UNC_R3_VN1_CREDITS_REJECT.NDR Number of times a request failed to acquire an NDR VN1 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN1 credit and was delayed. This should generally be a rare situation.; NDR packets are used to transmit a variety of protocol flits including grants and completions (CMP).
    UNC_R3_VN1_CREDITS_REJECT.SNP Number of times a request failed to acquire an SNP VN1 credit. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This therefore counts the number of times a request failed to acquire either a VNA or VN1 credit and was delayed. This should generally be a rare situation.; Filter for Snoop (SNP) message class. SNP is used for outgoing snoops. Note that snoop responses flow on the HOM message class.
    UNC_R3_VN1_CREDITS_USED.DRS Number of times a VN1 credit was used on the DRS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; Filter for Data Response (DRS). DRS is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using DRS.
    UNC_R3_VN1_CREDITS_USED.HOM Number of times a VN1 credit was used on the HOM message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; Filter for the Home (HOM) message class. HOM is generally used to send requests, request responses, and snoop responses.
    UNC_R3_VN1_CREDITS_USED.NCB Number of times a VN1 credit was used on the NCB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; Filter for Non-Coherent Broadcast (NCB). NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.
    UNC_R3_VN1_CREDITS_USED.NCS Number of times a VN1 credit was used on the NCS message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; Filter for Non-Coherent Standard (NCS).
    UNC_R3_VN1_CREDITS_USED.NDR Number of times a VN1 credit was used on the NDR message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; NDR packets are used to transmit a variety of protocol flits including grants and completions (CMP).
    UNC_R3_VN1_CREDITS_USED.SNP Number of times a VN1 credit was used on the SNP message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; Filter for Snoop (SNP) message class. SNP is used for outgoing snoops. Note that snoop responses flow on the HOM message class.
    UNC_R3_VNA_CREDITS_ACQUIRED.AD Number of QPI VNA Credit acquisitions. This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average lifetime of a credit holder. VNA credits are used by all message classes in order to communicate across QPI. If a packet is unable to acquire credits, it will then attempt to use credits from the VN0 pool. Note that a single packet may require multiple flit buffers (e.g., when data is being transferred). Therefore, this event will increment by the number of credits acquired in each cycle. Filtering based on message class is not provided; one can count the number of packets transferred in a given message class using a qfclk event.; AD Ring
    UNC_R3_VNA_CREDITS_ACQUIRED.BL Number of QPI VNA Credit acquisitions. This event can be used in conjunction with the VNA In-Use Accumulator to calculate the average lifetime of a credit holder. VNA credits are used by all message classes in order to communicate across QPI. If a packet is unable to acquire credits, it will then attempt to use credits from the VN0 pool. Note that a single packet may require multiple flit buffers (e.g., when data is being transferred). Therefore, this event will increment by the number of credits acquired in each cycle. Filtering based on message class is not provided; one can count the number of packets transferred in a given message class using a qfclk event.; BL Ring
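Per the description, dividing the VNA in-use accumulator by the acquisition count yields the average lifetime of a credit holder. A minimal sketch, assuming both counts were sampled over the same interval; the in-use accumulator event is referenced above but not listed in this excerpt, so its value here is an assumption, and all numbers are hypothetical:

```python
# Average lifetime of a VNA credit holder.
vna_in_use_accum = 900_000  # VNA credits in use, summed each cycle (assumed event)
vna_acquired = 300_000      # UNC_R3_VNA_CREDITS_ACQUIRED.AD

print(f"avg VNA credit lifetime: {vna_in_use_accum / vna_acquired:.1f} cycles")
```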
    UNC_R3_VNA_CREDITS_REJECT.DRS Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.; Filter for Data Response (DRS). DRS is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using DRS.
    UNC_R3_VNA_CREDITS_REJECT.HOM Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.; Filter for the Home (HOM) message class. HOM is generally used to send requests, request responses, and snoop responses.
    UNC_R3_VNA_CREDITS_REJECT.NCB Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.; Filter for Non-Coherent Broadcast (NCB). NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.
    UNC_R3_VNA_CREDITS_REJECT.NCS Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.; Filter for Non-Coherent Standard (NCS).
    UNC_R3_VNA_CREDITS_REJECT.NDR Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.; NDR packets are used to transmit a variety of protocol flits including grants and completions (CMP).
    UNC_R3_VNA_CREDITS_REJECT.SNP Number of attempted VNA credit acquisitions that were rejected because the VNA credit pool was full (or almost full). It is possible to filter this event by message class. Some packets use more than one flit buffer, and therefore must acquire multiple credits. Therefore, one could get a reject even if the VNA credits were not fully used up. The VNA pool is generally used to provide the bulk of the QPI bandwidth (as opposed to the VN0 pool which is used to guarantee forward progress). VNA credits can run out if the flit buffer on the receiving side starts to queue up substantially. This can happen if the rest of the uncore is unable to drain the requests fast enough.; Filter for Snoop (SNP) message class. SNP is used for outgoing snoops. Note that snoop responses flow on the HOM message class.
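Reject counts read most naturally against a time base. A minimal sketch with hypothetical values, assuming an R3QPI clockticks count from the same box; note that a VNA reject only means a fallback to VN0, while a VN0 reject (above) means the request was actually delayed:

```python
# VNA pressure indicators for the DRS message class.
vna_rejects = 1_200       # UNC_R3_VNA_CREDITS_REJECT.DRS (fell back to VN0)
vn0_rejects = 40          # UNC_R3_VN0_CREDITS_REJECT.DRS (request delayed)
clockticks = 2_000_000    # R3QPI uncore clocks (assumed available)

print(f"VNA rejects per 1K cycles:   {1000 * vna_rejects / clockticks:.2f}")
print(f"fully delayed per 1K cycles: {1000 * vn0_rejects / clockticks:.3f}")
```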
    UNC_S_BOUNCE_CONTROL Bounce Control
    UNC_S_CLOCKTICKS Uncore Clocks
    UNC_S_FAST_ASSERTED Counts the number of cycles in which either the local or incoming distress signals are asserted. Incoming distress includes up, dn and across.
    UNC_S_RING_AD_USED.DOWN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_S_RING_AD_USED.DOWN_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Even ring polarity.
    UNC_S_RING_AD_USED.DOWN_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Odd ring polarity.
    UNC_S_RING_AD_USED.UP Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_S_RING_AD_USED.UP_EVEN Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Even ring polarity.
    UNC_S_RING_AD_USED.UP_ODD Counts the number of cycles that the AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Odd ring polarity.
    UNC_S_RING_AK_USED.DOWN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_S_RING_AK_USED.DOWN_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Even ring polarity.
    UNC_S_RING_AK_USED.DOWN_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Odd ring polarity.
    UNC_S_RING_AK_USED.UP Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_S_RING_AK_USED.UP_EVEN Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Even ring polarity.
    UNC_S_RING_AK_USED.UP_ODD Counts the number of cycles that the AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Odd ring polarity.
    UNC_S_RING_BL_USED.DOWN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_S_RING_BL_USED.DOWN_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Even ring polarity.
    UNC_S_RING_BL_USED.DOWN_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Down and Odd ring polarity.
    UNC_S_RING_BL_USED.UP Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.
    UNC_S_RING_BL_USED.UP_EVEN Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Even ring polarity.
    UNC_S_RING_BL_USED.UP_ODD Counts the number of cycles that the BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. We really have two rings in HSX -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the "UP" direction is on the clockwise ring and "DN" is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.; Filters for the Up and Odd ring polarity.
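Ring-used cycle counts become utilization figures when divided by UNC_S_CLOCKTICKS. A minimal sketch with hypothetical values; each direction and polarity is tracked separately, and because UP and DOWN at a given stop sit on physically distinct rings, the ratios are reported per counter rather than summed:

```python
# AD ring utilization at this Sbo ring stop, per direction and polarity.
clockticks = 2_000_000  # UNC_S_CLOCKTICKS
used = {
    "UP_EVEN": 600_000,  # UNC_S_RING_AD_USED.UP_EVEN
    "UP_ODD": 550_000,   # UNC_S_RING_AD_USED.UP_ODD
}
for name, cycles in used.items():
    print(f"AD {name} utilization: {cycles / clockticks:.1%}")
```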
    UNC_S_RING_BOUNCES.AD_CACHE Number of LLC responses that bounced on the Ring.
    UNC_S_RING_BOUNCES.AK_CORE Number of LLC responses that bounced on the Ring.; Acknowledgements to core
    UNC_S_RING_BOUNCES.BL_CORE Number of LLC responses that bounced on the Ring.; Data Responses to core
    UNC_S_RING_BOUNCES.IV_CORE Number of LLC responses that bounced on the Ring.; Snoops of processor's cache.
    UNC_S_RING_IV_USED.DN Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. There is only 1 IV ring in HSX. Therefore, if one wants to monitor the "Even" ring, they should select both UP_EVEN and DN_EVEN. To monitor the "Odd" ring, they should select both UP_ODD and DN_ODD.; Filters any polarity
    UNC_S_RING_IV_USED.UP Counts the number of cycles that the IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sent, but does not include when packets are being sunk into the ring stop. There is only 1 IV ring in HSX. Therefore, if one wants to monitor the "Even" ring, they should select both UP_EVEN and DN_EVEN. To monitor the "Odd" ring, they should select both UP_ODD and DN_ODD.; Filters any polarity
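Because there is a single IV ring, observing one polarity means combining the up and down directions for that polarity, as the description notes. A minimal sketch, assuming two counters were programmed with the UP_EVEN and DN_EVEN umasks referenced above (those umasks are not listed in this excerpt, and the values are hypothetical):

```python
# Combined even-polarity usage of the single IV ring at this stop.
iv_up_even = 120_000  # UNC_S_RING_IV_USED, UP_EVEN umask
iv_dn_even = 110_000  # UNC_S_RING_IV_USED, DN_EVEN umask

print(f"IV even-ring used cycles: {iv_up_even + iv_dn_even}")
```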
    UNC_S_RING_SINK_STARVED.AD_CACHE
    UNC_S_RING_SINK_STARVED.AK_CORE
    UNC_S_RING_SINK_STARVED.BL_CORE
    UNC_S_RING_SINK_STARVED.IV_CORE
    UNC_S_RxR_BUSY_STARVED.AD_BNC Counts injection starvation. This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time. In this case, the Ingress is unable to forward to the Egress because a message (credited/bounceable) is being sent.
    UNC_S_RxR_BUSY_STARVED.AD_CRD Counts injection starvation. This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time. In this case, the Ingress is unable to forward to the Egress because a message (credited/bounceable) is being sent.
    UNC_S_RxR_BUSY_STARVED.BL_BNC Counts injection starvation. This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time. In this case, the Ingress is unable to forward to the Egress because a message (credited/bounceable) is being sent.
    UNC_S_RxR_BUSY_STARVED.BL_CRD Counts injection starvation. This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time. In this case, the Ingress is unable to forward to the Egress because a message (credited/bounceable) is being sent.
    UNC_S_RxR_BYPASS.AD_BNC Bypass the Sbo Ingress.
    UNC_S_RxR_BYPASS.AD_CRD Bypass the Sbo Ingress.
    UNC_S_RxR_BYPASS.AK Bypass the Sbo Ingress.
    UNC_S_RxR_BYPASS.BL_BNC Bypass the Sbo Ingress.
    UNC_S_RxR_BYPASS.BL_CRD Bypass the Sbo Ingress.
    UNC_S_RxR_BYPASS.IV Bypass the Sbo Ingress.
    UNC_S_RxR_CRD_STARVED.AD_BNC Counts injection starvation. This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time. In this case, the Ingress is unable to forward to the Egress due to lack of credit.
    UNC_S_RxR_CRD_STARVED.AD_CRD Counts injection starvation. This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time. In this case, the Ingress is unable to forward to the Egress due to lack of credit.
    UNC_S_RxR_CRD_STARVED.AK Counts injection starvation. This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time. In this case, the Ingress is unable to forward to the Egress due to lack of credit.
    UNC_S_RxR_CRD_STARVED.BL_BNC Counts injection starvation. This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time. In this case, the Ingress is unable to forward to the Egress due to lack of credit.
    UNC_S_RxR_CRD_STARVED.BL_CRD Counts injection starvation. This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time. In this case, the Ingress is unable to forward to the Egress due to lack of credit.
    UNC_S_RxR_CRD_STARVED.IFV Counts injection starvation. This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time. In this case, the Ingress is unable to forward to the Egress due to lack of credit.
    UNC_S_RxR_CRD_STARVED.IV Counts injection starvation. This starvation is triggered when the Ingress cannot send a transaction onto the ring for a long period of time. In this case, the Ingress is unable to forward to the Egress due to lack of credit.
    UNC_S_RxR_INSERTS.AD_BNC Number of allocations into the Sbo Ingress. The Ingress is used to queue up requests received from the ring.
    UNC_S_RxR_INSERTS.AD_CRD Number of allocations into the Sbo Ingress. The Ingress is used to queue up requests received from the ring.
    UNC_S_RxR_INSERTS.AK Number of allocations into the Sbo Ingress. The Ingress is used to queue up requests received from the ring.
    UNC_S_RxR_INSERTS.BL_BNC Number of allocations into the Sbo Ingress. The Ingress is used to queue up requests received from the ring.
    UNC_S_RxR_INSERTS.BL_CRD Number of allocations into the Sbo Ingress. The Ingress is used to queue up requests received from the ring.
    UNC_S_RxR_INSERTS.IV Number of allocations into the Sbo Ingress. The Ingress is used to queue up requests received from the ring.
    UNC_S_RxR_OCCUPANCY.AD_BNC Occupancy event for the Ingress buffers in the Sbo. The Ingress is used to queue up requests received from the ring.
    UNC_S_RxR_OCCUPANCY.AD_CRD Occupancy event for the Ingress buffers in the Sbo. The Ingress is used to queue up requests received from the ring.
    UNC_S_RxR_OCCUPANCY.AK Occupancy event for the Ingress buffers in the Sbo. The Ingress is used to queue up requests received from the ring.
    UNC_S_RxR_OCCUPANCY.BL_BNC Occupancy event for the Ingress buffers in the Sbo. The Ingress is used to queue up requests received from the ring.
    UNC_S_RxR_OCCUPANCY.BL_CRD Occupancy event for the Ingress buffers in the Sbo. The Ingress is used to queue up requests received from the ring.
    UNC_S_RxR_OCCUPANCY.IV Occupancy event for the Ingress buffers in the Sbo. The Ingress is used to queue up requests received from the ring.
    UNC_S_TxR_ADS_USED.AD
    UNC_S_TxR_ADS_USED.AK
    UNC_S_TxR_ADS_USED.BL
    UNC_S_TxR_INSERTS.AD_BNC Number of allocations into the Sbo Egress. The Egress is used to queue up requests destined for the ring.
    UNC_S_TxR_INSERTS.AD_CRD Number of allocations into the Sbo Egress. The Egress is used to queue up requests destined for the ring.
    UNC_S_TxR_INSERTS.AK Number of allocations into the Sbo Egress. The Egress is used to queue up requests destined for the ring.
    UNC_S_TxR_INSERTS.BL_BNC Number of allocations into the Sbo Egress. The Egress is used to queue up requests destined for the ring.
    UNC_S_TxR_INSERTS.BL_CRD Number of allocations into the Sbo Egress. The Egress is used to queue up requests destined for the ring.
    UNC_S_TxR_INSERTS.IV Number of allocations into the Sbo Egress. The Egress is used to queue up requests destined for the ring.
    UNC_S_TxR_OCCUPANCY.AD_BNC Occupancy event for the Egress buffers in the Sbo. The egress is used to queue up requests destined for the ring.
    UNC_S_TxR_OCCUPANCY.AD_CRD Occupancy event for the Egress buffers in the Sbo. The egress is used to queue up requests destined for the ring.
    UNC_S_TxR_OCCUPANCY.AK Occupancy event for the Egress buffers in the Sbo. The egress is used to queue up requests destined for the ring.
    UNC_S_TxR_OCCUPANCY.BL_BNC Occupancy event for the Egress buffers in the Sbo. The egress is used to queue up requests destined for the ring.
    UNC_S_TxR_OCCUPANCY.BL_CRD Occupancy event for the Egress buffers in the Sbo. The egress is used to queue up requests destined for the ring.
    UNC_S_TxR_OCCUPANCY.IV Occupancy event for the Egress buffers in the Sbo. The egress is used to queue up requests destined for the ring.
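The Sbo Ingress and Egress occupancy events follow the same recipe as the R3QPI queues: the occupancy accumulator divided by inserts approximates average queuing latency. A minimal sketch with hypothetical values for the AD bounceable queues:

```python
# Average queuing latency for the Sbo AD bounceable queues.
rx_occupancy = 800_000  # UNC_S_RxR_OCCUPANCY.AD_BNC
rx_inserts = 160_000    # UNC_S_RxR_INSERTS.AD_BNC
tx_occupancy = 500_000  # UNC_S_TxR_OCCUPANCY.AD_BNC
tx_inserts = 140_000    # UNC_S_TxR_INSERTS.AD_BNC

print(f"Ingress avg latency: {rx_occupancy / rx_inserts:.2f} cycles")
print(f"Egress avg latency:  {tx_occupancy / tx_inserts:.2f} cycles")
```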
    UNC_S_TxR_STARVED.AD Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.
    UNC_S_TxR_STARVED.AK Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.
    UNC_S_TxR_STARVED.BL Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.
    UNC_S_TxR_STARVED.IV Counts injection starvation. This starvation is triggered when the Egress cannot send a transaction onto the ring for a long period of time.
    UNC_U_CLOCKTICKS
    UNC_U_EVENT_MSG.DOORBELL_RCVD Virtual Logical Wire (legacy) messages received from the Uncore. Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.
    UNC_U_FILTER_MATCH.DISABLE Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.
    UNC_U_FILTER_MATCH.ENABLE Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.
    UNC_U_FILTER_MATCH.U2C_DISABLE Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.
    UNC_U_FILTER_MATCH.U2C_ENABLE Filter match per thread (w/ or w/o Filter Enable). Specify the thread to filter on using NCUPMONCTRLGLCTR.ThreadID.
    UNC_U_PHOLD_CYCLES.ASSERT_TO_ACK PHOLD cycles. Filter by source CoreID.
    UNC_U_RACU_REQUESTS Number of outstanding register requests within the message channel tracker.
    UNC_U_U2C_EVENTS.CMC Events coming from the Uncore can be sent to one or all cores.
    UNC_U_U2C_EVENTS.LIVELOCK Events coming from the Uncore can be sent to one or all cores; filter by core.
    UNC_U_U2C_EVENTS.LTERROR Events coming from the Uncore can be sent to one or all cores; filter by core.
    UNC_U_U2C_EVENTS.MONITOR_T0 Events coming from the Uncore can be sent to one or all cores; filter by core.
    UNC_U_U2C_EVENTS.MONITOR_T1 Events coming from the Uncore can be sent to one or all cores; filter by core.
    UNC_U_U2C_EVENTS.OTHER Events coming from the Uncore can be sent to one or all cores; includes PREQ, PSMI, P2U, Thermal, PCUSMI, and PMI.
    UNC_U_U2C_EVENTS.TRAP Events coming from the Uncore can be sent to one or all cores.
    UNC_U_U2C_EVENTS.UMC Events coming from the Uncore can be sent to one or all cores.
    OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=LLC_HIT.ANY_RESPONSE Counts demand data reads that hit in the L3
    OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=LLC_HIT.NO_SNOOP_NEEDED Counts demand data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=LLC_HIT.SNOOP_MISS Counts demand data reads that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts demand data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=LLC_HIT.HITM_OTHER_CORE Counts demand data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=LLC_MISS.ANY_RESPONSE Counts demand data reads that miss in the L3
    OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=LLC_MISS.LOCAL_DRAM Counts demand data reads that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=LLC_MISS.REMOTE_DRAM Counts demand data reads that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=LLC_MISS.ANY_DRAM Counts demand data reads that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=LLC_MISS.REMOTE_HITM Counts demand data reads that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=DEMAND_DATA_RD: response=LLC_MISS.REMOTE_HIT_FORWARD Counts demand data reads that miss the L3 and clean or shared data is transferred from remote cache
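Note: the LOCAL_DRAM/REMOTE_DRAM/ANY_DRAM miss breakdown above is the usual basis for a NUMA-locality metric: the fraction of demand-read L3 misses served from the local socket's memory. A minimal sketch follows, with hypothetical counts standing in for collected values of the two events named in the comments.

        # NUMA locality of demand data reads, from the LLC_MISS breakdown.
        # Counts are hypothetical placeholders, not measurements.

        local_dram = 900_000    # request=DEMAND_DATA_RD: response=LLC_MISS.LOCAL_DRAM
        remote_dram = 300_000   # request=DEMAND_DATA_RD: response=LLC_MISS.REMOTE_DRAM

        any_dram = local_dram + remote_dram   # should track LLC_MISS.ANY_DRAM
        locality = local_dram / any_dram if any_dram else 1.0
        print(f"local-DRAM share of demand-read misses: {locality:.1%}")

A low local share typically points at placement: the consuming thread runs on one socket while the pages it reads were first touched, and therefore allocated, on the other.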
    OFFCORE_RESPONSE:request=DEMAND_RFO: response=LLC_HIT.ANY_RESPONSE Counts all demand data writes (RFOs) that hit in the L3
    OFFCORE_RESPONSE:request=DEMAND_RFO: response=LLC_HIT.NO_SNOOP_NEEDED Counts all demand data writes (RFOs) that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=DEMAND_RFO: response=LLC_HIT.SNOOP_MISS Counts all demand data writes (RFOs) that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=DEMAND_RFO: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all demand data writes (RFOs) that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=DEMAND_RFO: response=LLC_HIT.HITM_OTHER_CORE Counts all demand data writes (RFOs) that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=DEMAND_RFO: response=LLC_MISS.ANY_RESPONSE Counts all demand data writes (RFOs) that miss in the L3
    OFFCORE_RESPONSE:request=DEMAND_RFO: response=LLC_MISS.LOCAL_DRAM Counts all demand data writes (RFOs) that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=DEMAND_RFO: response=LLC_MISS.REMOTE_DRAM Counts all demand data writes (RFOs) that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=DEMAND_RFO: response=LLC_MISS.ANY_DRAM Counts all demand data writes (RFOs) that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=DEMAND_RFO: response=LLC_MISS.REMOTE_HITM Counts all demand data writes (RFOs) that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=DEMAND_RFO: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all demand data writes (RFOs) that miss the L3 and clean or shared data is transferred from remote cache
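Note: for RFOs the REMOTE_HITM response is especially diagnostic. A store that misses the L3 and pulls modified data from a remote socket's cache is the signature of cross-socket cache-line ping-pong (true or false sharing). A small sketch of that ratio, again with hypothetical counts:

        # Share of RFO L3 misses that hit modified data in a remote cache.
        # Counts are hypothetical placeholders for the two events named above.

        rfo_miss_any = 50_000      # request=DEMAND_RFO: response=LLC_MISS.ANY_RESPONSE
        rfo_remote_hitm = 20_000   # request=DEMAND_RFO: response=LLC_MISS.REMOTE_HITM

        if rfo_miss_any:
            print(f"remote-HITM share of RFO misses: {rfo_remote_hitm / rfo_miss_any:.1%}")

A sustained double-digit share here usually rewards a data-layout fix (padding the contended structure, or sharding it per socket) more than any bandwidth tuning.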
    OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=LLC_HIT.ANY_RESPONSE Counts all demand code reads that hit in the L3
    OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=LLC_HIT.NO_SNOOP_NEEDED Counts all demand code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=LLC_HIT.SNOOP_MISS Counts all demand code reads that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all demand code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=LLC_HIT.HITM_OTHER_CORE Counts all demand code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=LLC_MISS.ANY_RESPONSE Counts all demand code reads that miss in the L3
    OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=LLC_MISS.LOCAL_DRAM Counts all demand code reads that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=LLC_MISS.REMOTE_DRAM Counts all demand code reads that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=LLC_MISS.ANY_DRAM Counts all demand code reads that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=LLC_MISS.REMOTE_HITM Counts all demand code reads that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=DEMAND_CODE_RD: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all demand code reads that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=COREWB: response=LLC_HIT.ANY_RESPONSE Counts writebacks (modified to exclusive) that hit in the L3
    OFFCORE_RESPONSE:request=COREWB: response=LLC_HIT.NO_SNOOP_NEEDED Counts writebacks (modified to exclusive) that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=COREWB: response=LLC_HIT.SNOOP_MISS Counts writebacks (modified to exclusive) that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=COREWB: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts writebacks (modified to exclusive) that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=COREWB: response=LLC_HIT.HITM_OTHER_CORE Counts writebacks (modified to exclusive) that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=COREWB: response=LLC_MISS.ANY_RESPONSE Counts writebacks (modified to exclusive) that miss in the L3
    OFFCORE_RESPONSE:request=COREWB: response=LLC_MISS.LOCAL_DRAM Counts writebacks (modified to exclusive) that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=COREWB: response=LLC_MISS.REMOTE_DRAM Counts writebacks (modified to exclusive) that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=COREWB: response=LLC_MISS.ANY_DRAM Counts writebacks (modified to exclusive) that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=COREWB: response=LLC_MISS.REMOTE_HITM Counts writebacks (modified to exclusive) that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=COREWB: response=LLC_MISS.REMOTE_HIT_FORWARD Counts writebacks (modified to exclusive) that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=LLC_HIT.ANY_RESPONSE Counts prefetch (that bring data to L2) data reads that hit in the L3
    OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=LLC_HIT.NO_SNOOP_NEEDED Counts prefetch (that bring data to L2) data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=LLC_HIT.SNOOP_MISS Counts prefetch (that bring data to L2) data reads that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts prefetch (that bring data to L2) data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=LLC_HIT.HITM_OTHER_CORE Counts prefetch (that bring data to L2) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=LLC_MISS.ANY_RESPONSE Counts prefetch (that bring data to L2) data reads that miss in the L3
    OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=LLC_MISS.LOCAL_DRAM Counts prefetch (that bring data to L2) data reads that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=LLC_MISS.REMOTE_DRAM Counts prefetch (that bring data to L2) data reads that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=LLC_MISS.ANY_DRAM Counts prefetch (that bring data to L2) data reads that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=LLC_MISS.REMOTE_HITM Counts prefetch (that bring data to L2) data reads that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_L2_DATA_RD: response=LLC_MISS.REMOTE_HIT_FORWARD Counts prefetch (that bring data to L2) data reads that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_L2_RFO: response=LLC_HIT.ANY_RESPONSE Counts all prefetch (that bring data to L2) RFOs that hit in the L3
    OFFCORE_RESPONSE:request=PF_L2_RFO: response=LLC_HIT.NO_SNOOP_NEEDED Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=PF_L2_RFO: response=LLC_HIT.SNOOP_MISS Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=PF_L2_RFO: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=PF_L2_RFO: response=LLC_HIT.HITM_OTHER_CORE Counts all prefetch (that bring data to L2) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=PF_L2_RFO: response=LLC_MISS.ANY_RESPONSE Counts all prefetch (that bring data to L2) RFOs that miss in the L3
    OFFCORE_RESPONSE:request=PF_L2_RFO: response=LLC_MISS.LOCAL_DRAM Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=PF_L2_RFO: response=LLC_MISS.REMOTE_DRAM Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=PF_L2_RFO: response=LLC_MISS.ANY_DRAM Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=PF_L2_RFO: response=LLC_MISS.REMOTE_HITM Counts all prefetch (that bring data to L2) RFOs that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_L2_RFO: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all prefetch (that bring data to L2) RFOs that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_L2_CODE_RD: response=LLC_HIT.ANY_RESPONSE Counts all prefetch (that bring data to L2) code reads that hit in the L3
    OFFCORE_RESPONSE:request=PF_L2_CODE_RD: response=LLC_HIT.NO_SNOOP_NEEDED Counts all prefetch (that bring data to L2) code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=PF_L2_CODE_RD: response=LLC_HIT.SNOOP_MISS Counts all prefetch (that bring data to L2) code reads that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=PF_L2_CODE_RD: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all prefetch (that bring data to L2) code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=PF_L2_CODE_RD: response=LLC_HIT.HITM_OTHER_CORE Counts all prefetch (that bring data to L2) code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=PF_L2_CODE_RD: response=LLC_MISS.ANY_RESPONSE Counts all prefetch (that bring data to L2) code reads that miss in the L3
    OFFCORE_RESPONSE:request=PF_L2_CODE_RD: response=LLC_MISS.LOCAL_DRAM Counts all prefetch (that bring data to L2) code reads that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=PF_L2_CODE_RD: response=LLC_MISS.REMOTE_DRAM Counts all prefetch (that bring data to L2) code reads that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=PF_L2_CODE_RD: response=LLC_MISS.ANY_DRAM Counts all prefetch (that bring data to L2) code reads that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=PF_L2_CODE_RD: response=LLC_MISS.REMOTE_HITM Counts all prefetch (that bring data to L2) code reads that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_L2_CODE_RD: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all prefetch (that bring data to L2) code reads that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_LLC_DATA_RD: response=LLC_HIT.ANY_RESPONSE Counts all prefetch (that bring data to LLC only) data reads that hit in the L3
    OFFCORE_RESPONSE:request=PF_LLC_DATA_RD: response=LLC_HIT.NO_SNOOP_NEEDED Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=PF_LLC_DATA_RD: response=LLC_HIT.SNOOP_MISS Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=PF_LLC_DATA_RD: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=PF_LLC_DATA_RD: response=LLC_HIT.HITM_OTHER_CORE Counts all prefetch (that bring data to LLC only) data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=PF_LLC_DATA_RD: response=LLC_MISS.ANY_RESPONSE Counts all prefetch (that bring data to LLC only) data reads that miss in the L3
    OFFCORE_RESPONSE:request=PF_LLC_DATA_RD: response=LLC_MISS.LOCAL_DRAM Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=PF_LLC_DATA_RD: response=LLC_MISS.REMOTE_DRAM Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=PF_LLC_DATA_RD: response=LLC_MISS.ANY_DRAM Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=PF_LLC_DATA_RD: response=LLC_MISS.REMOTE_HITM Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_LLC_DATA_RD: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all prefetch (that bring data to LLC only) data reads that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_LLC_RFO: response=LLC_HIT.ANY_RESPONSE Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3
    OFFCORE_RESPONSE:request=PF_LLC_RFO: response=LLC_HIT.NO_SNOOP_NEEDED Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=PF_LLC_RFO: response=LLC_HIT.SNOOP_MISS Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=PF_LLC_RFO: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=PF_LLC_RFO: response=LLC_HIT.HITM_OTHER_CORE Counts all prefetch (that bring data to LLC only) RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=PF_LLC_RFO: response=LLC_MISS.ANY_RESPONSE Counts all prefetch (that bring data to LLC only) RFOs that miss in the L3
    OFFCORE_RESPONSE:request=PF_LLC_RFO: response=LLC_MISS.LOCAL_DRAM Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=PF_LLC_RFO: response=LLC_MISS.REMOTE_DRAM Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=PF_LLC_RFO: response=LLC_MISS.ANY_DRAM Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=PF_LLC_RFO: response=LLC_MISS.REMOTE_HITM Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_LLC_RFO: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all prefetch (that bring data to LLC only) RFOs that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_LLC_CODE_RD: response=LLC_HIT.ANY_RESPONSE Counts prefetch (that bring data to LLC only) code reads that hit in the L3
    OFFCORE_RESPONSE:request=PF_LLC_CODE_RD: response=LLC_HIT.NO_SNOOP_NEEDED Counts prefetch (that bring data to LLC only) code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=PF_LLC_CODE_RD: response=LLC_HIT.SNOOP_MISS Counts prefetch (that bring data to LLC only) code reads that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=PF_LLC_CODE_RD: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts prefetch (that bring data to LLC only) code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=PF_LLC_CODE_RD: response=LLC_HIT.HITM_OTHER_CORE Counts prefetch (that bring data to LLC only) code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=PF_LLC_CODE_RD: response=LLC_MISS.ANY_RESPONSE Counts prefetch (that bring data to LLC only) code reads that miss in the L3
    OFFCORE_RESPONSE:request=PF_LLC_CODE_RD: response=LLC_MISS.LOCAL_DRAM Counts prefetch (that bring data to LLC only) code reads that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=PF_LLC_CODE_RD: response=LLC_MISS.REMOTE_DRAM Counts prefetch (that bring data to LLC only) code reads that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=PF_LLC_CODE_RD: response=LLC_MISS.ANY_DRAM Counts prefetch (that bring data to LLC only) code reads that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=PF_LLC_CODE_RD: response=LLC_MISS.REMOTE_HITM Counts prefetch (that bring data to LLC only) code reads that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=PF_LLC_CODE_RD: response=LLC_MISS.REMOTE_HIT_FORWARD Counts prefetch (that bring data to LLC only) code reads that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=OTHER: response=LLC_HIT.ANY_RESPONSE Counts any other requests that hit in the L3
    OFFCORE_RESPONSE:request=OTHER: response=LLC_HIT.NO_SNOOP_NEEDED Counts any other requests that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=OTHER: response=LLC_HIT.SNOOP_MISS Counts any other requests that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=OTHER: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts any other requests that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=OTHER: response=LLC_HIT.HITM_OTHER_CORE Counts any other requests that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=OTHER: response=LLC_MISS.ANY_RESPONSE Counts any other requests that miss in the L3
    OFFCORE_RESPONSE:request=OTHER: response=LLC_MISS.LOCAL_DRAM Counts any other requests that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=OTHER: response=LLC_MISS.REMOTE_DRAM Counts any other requests that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=OTHER: response=LLC_MISS.ANY_DRAM Counts any other requests that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=OTHER: response=LLC_MISS.REMOTE_HITM Counts any other requests that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=OTHER: response=LLC_MISS.REMOTE_HIT_FORWARD Counts any other requests that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=LLC_HIT.ANY_RESPONSE Counts all prefetch data reads that hit in the L3
    OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=LLC_HIT.NO_SNOOP_NEEDED Counts all prefetch data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=LLC_HIT.SNOOP_MISS Counts all prefetch data reads that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all prefetch data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=LLC_HIT.HITM_OTHER_CORE Counts all prefetch data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=LLC_MISS.ANY_RESPONSE Counts all prefetch data reads that miss in the L3
    OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=LLC_MISS.LOCAL_DRAM Counts all prefetch data reads that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=LLC_MISS.REMOTE_DRAM Counts all prefetch data reads that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=LLC_MISS.ANY_DRAM Counts all prefetch data reads that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=LLC_MISS.REMOTE_HITM Counts all prefetch data reads that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_PF_DATA_RD: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all prefetch data reads that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_PF_RFO: response=LLC_HIT.ANY_RESPONSE Counts prefetch RFOs that hit in the L3
    OFFCORE_RESPONSE:request=ALL_PF_RFO: response=LLC_HIT.NO_SNOOP_NEEDED Counts prefetch RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=ALL_PF_RFO: response=LLC_HIT.SNOOP_MISS Counts prefetch RFOs that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=ALL_PF_RFO: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts prefetch RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=ALL_PF_RFO: response=LLC_HIT.HITM_OTHER_CORE Counts prefetch RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=ALL_PF_RFO: response=LLC_MISS.ANY_RESPONSE Counts prefetch RFOs that miss in the L3
    OFFCORE_RESPONSE:request=ALL_PF_RFO: response=LLC_MISS.LOCAL_DRAM Counts prefetch RFOs that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=ALL_PF_RFO: response=LLC_MISS.REMOTE_DRAM Counts prefetch RFOs that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=ALL_PF_RFO: response=LLC_MISS.ANY_DRAM Counts prefetch RFOs that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=ALL_PF_RFO: response=LLC_MISS.REMOTE_HITM Counts prefetch RFOs that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_PF_RFO: response=LLC_MISS.REMOTE_HIT_FORWARD Counts prefetch RFOs that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_PF_CODE_RD: response=LLC_HIT.ANY_RESPONSE Counts all prefetch code reads that hit in the L3
    OFFCORE_RESPONSE:request=ALL_PF_CODE_RD: response=LLC_HIT.NO_SNOOP_NEEDED Counts all prefetch code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=ALL_PF_CODE_RD: response=LLC_HIT.SNOOP_MISS Counts all prefetch code reads that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=ALL_PF_CODE_RD: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all prefetch code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=ALL_PF_CODE_RD: response=LLC_HIT.HITM_OTHER_CORE Counts all prefetch code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=ALL_PF_CODE_RD: response=LLC_MISS.ANY_RESPONSE Counts all prefetch code reads that miss in the L3
    OFFCORE_RESPONSE:request=ALL_PF_CODE_RD: response=LLC_MISS.LOCAL_DRAM Counts all prefetch code reads that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=ALL_PF_CODE_RD: response=LLC_MISS.REMOTE_DRAM Counts all prefetch code reads that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=ALL_PF_CODE_RD: response=LLC_MISS.ANY_DRAM Counts all prefetch code reads that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=ALL_PF_CODE_RD: response=LLC_MISS.REMOTE_HITM Counts all prefetch code reads that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_PF_CODE_RD: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all prefetch code reads that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_DATA_RD: response=LLC_HIT.ANY_RESPONSE Counts all demand & prefetch data reads that hit in the L3
    OFFCORE_RESPONSE:request=ALL_DATA_RD: response=LLC_HIT.NO_SNOOP_NEEDED Counts all demand & prefetch data reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=ALL_DATA_RD: response=LLC_HIT.SNOOP_MISS Counts all demand & prefetch data reads that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=ALL_DATA_RD: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all demand & prefetch data reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=ALL_DATA_RD: response=LLC_HIT.HITM_OTHER_CORE Counts all demand & prefetch data reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=ALL_DATA_RD: response=LLC_MISS.ANY_RESPONSE Counts all demand & prefetch data reads that miss in the L3
    OFFCORE_RESPONSE:request=ALL_DATA_RD: response=LLC_MISS.LOCAL_DRAM Counts all demand & prefetch data reads that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=ALL_DATA_RD: response=LLC_MISS.REMOTE_DRAM Counts all demand & prefetch data reads that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=ALL_DATA_RD: response=LLC_MISS.ANY_DRAM Counts all demand & prefetch data reads that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=ALL_DATA_RD: response=LLC_MISS.REMOTE_HITM Counts all demand & prefetch data reads that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_DATA_RD: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all demand & prefetch data reads that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_RFO: response=LLC_HIT.ANY_RESPONSE Counts all demand & prefetch RFOs that hit in the L3
    OFFCORE_RESPONSE:request=ALL_RFO: response=LLC_HIT.NO_SNOOP_NEEDED Counts all demand & prefetch RFOs that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=ALL_RFO: response=LLC_HIT.SNOOP_MISS Counts all demand & prefetch RFOs that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=ALL_RFO: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all demand & prefetch RFOs that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=ALL_RFO: response=LLC_HIT.HITM_OTHER_CORE Counts all demand & prefetch RFOs that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=ALL_RFO: response=LLC_MISS.ANY_RESPONSE Counts all demand & prefetch RFOs that miss in the L3
    OFFCORE_RESPONSE:request=ALL_RFO: response=LLC_MISS.LOCAL_DRAM Counts all demand & prefetch RFOs that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=ALL_RFO: response=LLC_MISS.REMOTE_DRAM Counts all demand & prefetch RFOs that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=ALL_RFO: response=LLC_MISS.ANY_DRAM Counts all demand & prefetch RFOs that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=ALL_RFO: response=LLC_MISS.REMOTE_HITM Counts all demand & prefetch RFOs that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_RFO: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all demand & prefetch RFOs that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_CODE_RD: response=LLC_HIT.ANY_RESPONSE Counts all demand & prefetch code reads that hit in the L3
    OFFCORE_RESPONSE:request=ALL_CODE_RD: response=LLC_HIT.NO_SNOOP_NEEDED Counts all demand & prefetch code reads that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=ALL_CODE_RD: response=LLC_HIT.SNOOP_MISS Counts all demand & prefetch code reads that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=ALL_CODE_RD: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all demand & prefetch code reads that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=ALL_CODE_RD: response=LLC_HIT.HITM_OTHER_CORE Counts all demand & prefetch code reads that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=ALL_CODE_RD: response=LLC_MISS.ANY_RESPONSE Counts all demand & prefetch code reads that miss in the L3
    OFFCORE_RESPONSE:request=ALL_CODE_RD: response=LLC_MISS.LOCAL_DRAM Counts all demand & prefetch code reads that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=ALL_CODE_RD: response=LLC_MISS.REMOTE_DRAM Counts all demand & prefetch code reads that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=ALL_CODE_RD: response=LLC_MISS.ANY_DRAM Counts all demand & prefetch code reads that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=ALL_CODE_RD: response=LLC_MISS.REMOTE_HITM Counts all demand & prefetch code reads that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_CODE_RD: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all demand & prefetch code reads that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_READS: response=LLC_HIT.ANY_RESPONSE Counts all data/code/rfo reads (demand & prefetch) that hit in the L3
    OFFCORE_RESPONSE:request=ALL_READS: response=LLC_HIT.NO_SNOOP_NEEDED Counts all data/code/rfo reads (demand & prefetch) that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=ALL_READS: response=LLC_HIT.SNOOP_MISS Counts all data/code/rfo reads (demand & prefetch) that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=ALL_READS: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all data/code/rfo reads (demand & prefetch) that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=ALL_READS: response=LLC_HIT.HITM_OTHER_CORE Counts all data/code/rfo reads (demand & prefetch) that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=ALL_READS: response=LLC_MISS.ANY_RESPONSE Counts all data/code/rfo reads (demand & prefetch) that miss in the L3
    OFFCORE_RESPONSE:request=ALL_READS: response=LLC_MISS.LOCAL_DRAM Counts all data/code/rfo reads (demand & prefetch) that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=ALL_READS: response=LLC_MISS.REMOTE_DRAM Counts all data/code/rfo reads (demand & prefetch) that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=ALL_READS: response=LLC_MISS.ANY_DRAM Counts all data/code/rfo reads (demand & prefetch) that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=ALL_READS: response=LLC_MISS.REMOTE_HITM Counts all data/code/rfo reads (demand & prefetch) that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_READS: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all data/code/rfo reads (demand & prefetch) that miss the L3 and clean or shared data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_REQUESTS: response=LLC_HIT.ANY_RESPONSE Counts all requests that hit in the L3
    OFFCORE_RESPONSE:request=ALL_REQUESTS: response=LLC_HIT.NO_SNOOP_NEEDED Counts all requests that hit in the L3 and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores
    OFFCORE_RESPONSE:request=ALL_REQUESTS: response=LLC_HIT.SNOOP_MISS Counts all requests that hit in the L3 and the snoops sent to sibling cores return clean response
    OFFCORE_RESPONSE:request=ALL_REQUESTS: response=LLC_HIT.HIT_OTHER_CORE_NO_FWD Counts all requests that hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded
    OFFCORE_RESPONSE:request=ALL_REQUESTS: response=LLC_HIT.HITM_OTHER_CORE Counts all requests that hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded
    OFFCORE_RESPONSE:request=ALL_REQUESTS: response=LLC_MISS.ANY_RESPONSE Counts all requests that miss in the L3
    OFFCORE_RESPONSE:request=ALL_REQUESTS: response=LLC_MISS.LOCAL_DRAM Counts all requests that miss the L3 and the data is returned from local dram
    OFFCORE_RESPONSE:request=ALL_REQUESTS: response=LLC_MISS.REMOTE_DRAM Counts all requests that miss the L3 and the data is returned from remote dram
    OFFCORE_RESPONSE:request=ALL_REQUESTS: response=LLC_MISS.ANY_DRAM Counts all requests that miss the L3 and the data is returned from local or remote dram
    OFFCORE_RESPONSE:request=ALL_REQUESTS: response=LLC_MISS.REMOTE_HITM Counts all requests that miss the L3 and the modified data is transferred from remote cache
    OFFCORE_RESPONSE:request=ALL_REQUESTS: response=LLC_MISS.REMOTE_HIT_FORWARD Counts all requests that miss the L3 and clean or shared data is transferred from remote cache
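Note: for any request type above, the four LLC_HIT snoop sub-responses (NO_SNOOP_NEEDED, SNOOP_MISS, HIT_OTHER_CORE_NO_FWD, HITM_OTHER_CORE) should sum to approximately the matching LLC_HIT.ANY_RESPONSE count, which makes a cheap consistency check for a collection run. The sketch below uses hypothetical counts and assumes the sub-responses partition ANY_RESPONSE, which holds only approximately in practice.

        # Sanity-check that LLC_HIT sub-responses roughly partition ANY_RESPONSE.
        # All counts are hypothetical placeholders for one request type.

        hit = {
            "NO_SNOOP_NEEDED": 400_000,
            "SNOOP_MISS": 120_000,
            "HIT_OTHER_CORE_NO_FWD": 60_000,
            "HITM_OTHER_CORE": 20_000,
        }
        any_response = 602_000   # LLC_HIT.ANY_RESPONSE for the same request type

        total = sum(hit.values())
        drift = abs(total - any_response) / any_response
        print(f"sub-responses sum to {total} vs ANY_RESPONSE {any_response} ({drift:.2%} drift)")
        for name, count in hit.items():
            print(f"  {name:24s} {count / any_response:6.1%}")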