
Overview

A Profile represents a loaded profile scoped to a single event type (cpu, wall, alloc, lock). Create profiles with the open() function.
p = open("profile.jfr")
print(p.samples)  # Total sample count
print(p.event)    # "cpu"

Properties

stacks
list[Stack]
List of all stacks in the profile. Each stack represents one unique call path.
for s in p.stacks:
    if s.has("HashMap"):
        print(s.leaf.name)
samples
int
Total sample count across all stacks.
print(p.samples)  # 1523
duration
float
Recording duration in seconds. Returns 0 for formats without duration metadata (collapsed text, some pprof producers).
print(p.duration)  # 60.5
start
float
Start time in seconds from recording start. For root profiles this is 0. For windowed profiles (from open() with start= or from split()), this is the window start offset.
p = open("profile.jfr", start="5s", end="10s")
print(p.start)  # 5.0
end
float
End time in seconds from recording start. For root profiles equals duration. For windowed profiles, this is the window end offset.
p = open("profile.jfr", start="5s", end="10s")
print(p.end)  # 10.0
event
string
Selected event type: "cpu", "wall", "alloc", or "lock".
p = open("profile.jfr", event="wall")
print(p.event)  # "wall"
events
list[string]
All event types available in the source file. JFR files may contain multiple event types.
p = open("multi.jfr")
for e in p.events:
    print(e)  # cpu, wall, alloc, lock
path
string
Path to the source file.
print(p.path)  # "profile.jfr"

Methods

hot()

Return top methods by self time or total time.
hot(n=all, fqn=False, sort="self") → list[Method]
n
int
default:"all"
Limit results to top N methods. 0 or omitted returns all methods.
fqn
bool
default:"False"
Use fully-qualified names (e.g. java.util.HashMap.resize) instead of short names (HashMap.resize). Changes aggregation granularity.
sort
string
default:"self"
Sort by "self" (leaf time) or "total" (inclusive time including callees).
Returns: List of Method objects with name, fqn, self, self_pct, total, total_pct.
for m in p.hot(10):
    print(rjust(str(round(m.self_pct, 1)), 6) + "%  " + m.name)
  23.5%  HashMap.resize
  18.2%  ThreadPoolExecutor.runWorker
  12.1%  String.indexOf
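The difference between sort="self" and sort="total" is which frames a stack's samples are credited to: self time goes only to the leaf frame, total time to every distinct frame on the stack. A minimal plain-Python sketch of that aggregation (illustrative data and structure, not the library's internals):

```python
# Sketch: "self" vs "total" aggregation (illustrative only; not the
# library's internals). Each stack is (frames root-to-leaf, samples).
stacks = [
    (["main", "Server.run", "HashMap.resize"], 30),
    (["main", "Server.run"], 10),
    (["main", "Worker.loop", "HashMap.resize"], 20),
]

self_time = {}
total_time = {}
for frames, samples in stacks:
    # Self time: only the leaf frame is credited.
    leaf = frames[-1]
    self_time[leaf] = self_time.get(leaf, 0) + samples
    # Total time: every distinct frame on the stack is credited once.
    for f in set(frames):
        total_time[f] = total_time.get(f, 0) + samples

# HashMap.resize has 50 self samples; main has 60 total but 0 self,
# which is why sort="total" surfaces entry points and sort="self"
# surfaces the methods actually burning CPU.
```

This is also why fqn changes aggregation granularity: two methods with the same short name collapse into one entry unless their fully-qualified names keep them apart.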

threads()

Return thread sample distribution.
threads(n=all) → list[Thread]
n
int
default:"all"
Limit to top N threads by sample count.
Returns: List of Thread objects with name, samples, pct.
for th in p.threads(5):
    print(ljust(th.name, 30) + rjust(str(th.samples), 8))
worker-pool-1              452
worker-pool-2              387
main                       201
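Assuming Thread.pct is each thread's share of the profile's total samples (a reasonable reading, not stated explicitly above), the computation looks like this in plain Python (string methods here, not the DSL's free ljust/rjust helpers):

```python
# Sketch: per-thread percentages, assumed to mirror Thread.pct.
counts = {"worker-pool-1": 452, "worker-pool-2": 387, "main": 201}
total = sum(counts.values())

for name, samples in sorted(counts.items(), key=lambda kv: -kv[1]):
    pct = 100.0 * samples / total
    print(name.ljust(30) + str(samples).rjust(8) + ("%.1f" % pct).rjust(8))
```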

filter()

Return a new profile containing only stacks matching a predicate.
filter(fn) → Profile
fn
function(Stack) → bool
required
Predicate function. Stacks where fn(stack) returns True are kept.
Returns: New Profile with filtered stacks. Original profile is unchanged.
# Keep only stacks containing HashMap
filtered = p.filter(lambda s: s.has("HashMap"))
print(filtered.samples)  # Subset of p.samples

# Filter by thread
workers = p.filter(lambda s: s.thread_has("worker"))

# Chain filters
result = p.filter(lambda s: s.has("Server")).filter(lambda s: s.depth > 5)
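Chaining filters is equivalent to a single predicate that AND-s the conditions; each call just narrows the surviving set of stacks. A plain-Python sketch of that equivalence, using dicts in place of Stack objects (hypothetical structure, for illustration only):

```python
# Sketch: chaining two filters equals one combined AND-ed predicate.
stacks = [{"frames": ["Server.accept", "read"], "depth": 8},
          {"frames": ["Server.accept"], "depth": 3},
          {"frames": ["idle"], "depth": 1}]

has_server = lambda s: any("Server" in f for f in s["frames"])
deep = lambda s: s["depth"] > 5

chained = [s for s in stacks if has_server(s)]
chained = [s for s in chained if deep(s)]           # second pass narrows
combined = [s for s in stacks if has_server(s) and deep(s)]

assert chained == combined  # same surviving stacks either way
```

Combining conditions into one filter() call avoids building the intermediate Profile, which may matter for very large profiles.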

group_by()

Partition profile into groups by a key function.
group_by(fn) → dict[string, Profile]
fn
function(Stack) → string | None
required
Key function. Stacks returning the same key are grouped together. None excludes the stack.
Returns: Dictionary mapping keys to Profile objects.
# Group by thread pool (extract prefix before dash)
groups = p.group_by(lambda s: s.thread.split("-")[0] if s.thread else None)
for name in sorted(groups.keys()):
    print(name + ": " + str(groups[name].samples))
worker: 823
io: 412
scheduler: 288
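The None-exclusion rule means the key function doubles as a filter: any stack whose key is None lands in no group at all. A plain-Python sketch of the grouping above, with (thread, samples) tuples standing in for stacks (illustrative only):

```python
# Sketch: group_by semantics. A None key excludes the stack entirely.
stacks = [("worker-1", 400), ("worker-2", 423), ("io-0", 412),
          ("scheduler", 288), (None, 17)]  # last stack has no thread

groups = {}
for thread, samples in stacks:
    key = thread.split("-")[0] if thread else None
    if key is None:
        continue  # excluded from every group
    groups[key] = groups.get(key, 0) + samples

print(groups)  # {'worker': 823, 'io': 412, 'scheduler': 288}
```

Note that the 17 thread-less samples vanish from the result, so the groups' sample counts need not sum to p.samples.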

timeline()

Divide profile into time buckets. JFR only; requires per-sample timestamps.
timeline(resolution=None, buckets=None) → list[Bucket]
resolution
string | int | float
Bucket width. Accepts duration strings ("1s", "500ms"), or numeric seconds. Numeric values require keyword form: timeline(resolution=30).
buckets
int
Target number of buckets. Overrides resolution if specified.
Returns: List of Bucket objects with start, end, samples, label, stacks, profile.
First call to timeline() triggers a full re-parse of the JFR file with timestamps enabled. This is expensive for large files. Subsequent calls reuse cached data.
# 1-second buckets
buckets = p.timeline(resolution="1s")
for b in buckets:
    print(b.label + ": " + str(b.samples))
0.0s-1.0s: 142
1.0s-2.0s: 138
2.0s-3.0s: 151
# 10 equal-width buckets
buckets = p.timeline(buckets=10)
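One way to picture the resolution parameter: a duration string parses to a bucket width in seconds, and the bucket count is the duration divided by that width, rounded up. The grammar below ("ms", "s", "m", "h" components, as in "1m30s") is an assumption for illustration; the library's exact accepted syntax is not specified here.

```python
import re

# Sketch: parsing duration strings like "500ms", "1s", "1m30s" into
# seconds. The accepted grammar is an assumption, not the library's spec.
UNITS = {"ms": 0.001, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_duration(text):
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|s|m|h)", text):
        total += float(value) * UNITS[unit]
    return total

duration = 60.5                    # e.g. p.duration
width = parse_duration("1s")       # resolution="1s" gives 1.0s buckets
n_buckets = -(-duration // width)  # ceiling division: 61 buckets
```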

split()

Split profile at time boundaries. JFR only.
split(times) → list[Profile]
times
list[float | string]
required
Split points in seconds (floats) or duration strings ("5s", "1m30s"). Must be strictly increasing and within profile duration. Times are relative to the profile’s scope, not recording start.
Returns: len(times) + 1 profiles representing [0, t1), [t1, t2), …, [tn, end).
# Split at 5s and 10s → 3 profiles
parts = p.split([5.0, 10.0])
for i, part in enumerate(parts):
    print("Part " + str(i) + ": " + str(part.start) + "s-" + str(part.end) + "s")
Part 0: 0.0s-5.0s
Part 1: 5.0s-10.0s
Part 2: 10.0s-60.0s
# Mix floats and duration strings
parts = p.split(["5s", 10.5, "30s"])

# Split a windowed profile - times are relative to window start
windowed = open("profile.jfr", start="10s", end="20s")
parts = windowed.split([5.0])  # Splits at absolute 15s
print(parts[0].start)  # 10.0
print(parts[1].start)  # 15.0
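The window arithmetic can be sketched as building half-open intervals from the split points, each offset by the profile's own start (a simplified model of the behavior documented above, not the library's code):

```python
# Sketch: split([t1, t2, ...]) yields len(times)+1 half-open windows.
# Times are relative to the profile's scope; absolute = scope start + t.
def windows(start, end, times):
    bounds = [start] + [start + t for t in times] + [end]
    return list(zip(bounds, bounds[1:]))

print(windows(0.0, 60.0, [5.0, 10.0]))
# [(0.0, 5.0), (5.0, 10.0), (10.0, 60.0)]

print(windows(10.0, 20.0, [5.0]))   # windowed profile, split at 5s
# [(10.0, 15.0), (15.0, 20.0)]
```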

tree()

Generate a call tree from a method.
tree(method="", depth=4, min_pct=1.0) → string
method
string
default:""
Root method name (substring match). Empty string starts from profile root.
depth
int
default:"4"
Maximum tree depth.
min_pct
float
default:"1.0"
Minimum percentage threshold to include branches.
print(p.tree("HashMap.get", depth=3))

trace()

Find the hottest path from a method.
trace(method, min_pct=0.5, fqn=False) → string
method
string
required
Starting method name (substring match).
min_pct
float
default:"0.5"
Minimum percentage threshold for path segments.
fqn
bool
default:"False"
Use fully-qualified names.
print(p.trace("HashMap.resize"))

callers()

Show callers of a method as a tree.
callers(method, depth=4, min_pct=1.0) → string
method
string
required
Target method name (substring match).
depth
int
default:"4"
Maximum tree depth toward root.
min_pct
float
default:"1.0"
Minimum percentage threshold.
print(p.callers("HashMap.resize"))

no_idle()

Remove idle leaf frames (same as CLI --no-idle).
no_idle() → Profile
Returns: New profile with idle frames removed.
active = p.no_idle()

summary()

Generate a one-line summary string.
summary() → string
Returns: A string of the form "event: N samples, Xs, M stacks".
print(p.summary())
# cpu: 1523 samples, 60.5s, 847 stacks

Examples

Compare time windows

p = open("profile.jfr")
parts = p.split([p.duration / 2])
d = diff(parts[0], parts[1])
for e in d.regressions:
    print(e.name + " +" + str(round(e.delta, 1)) + "%")

Find callers of a leaf method

p = open("profile.jfr")
callers = {}
for s in p.stacks:
    if s.leaf and s.leaf.name == "__sched_yield":
        below = s.below("__sched_yield")
        if len(below) > 0:
            c = below[-1].name  # Direct caller
            callers[c] = callers.get(c, 0) + s.samples

for name, count in sorted(callers.items(), key=lambda x: x[1], reverse=True):
    print(str(count) + "  " + name)

CI budget check

p = open("profile.jfr")
gc = p.filter(lambda s: s.has("GC") or s.has("FullGC"))
gc_pct = 100.0 * gc.samples / p.samples

if gc_pct > 15.0:
    fail("GC overhead exceeds budget: " + str(round(gc_pct, 1)) + "%")

print("GC: " + str(round(gc_pct, 1)) + "% ✓")
