Overview
A Bucket represents a time slice from profile.timeline(). Each bucket contains the stacks and samples that occurred within its time range.
Buckets are only available for JFR profiles with per-sample timestamps.
```
p = open("profile.jfr")
buckets = p.timeline(resolution="1s")
for b in buckets:
    print(b.label + ": " + str(b.samples))
```

Output:

```
0.0s-1.0s: 142
1.0s-2.0s: 138
2.0s-3.0s: 151
```
Properties
start
Bucket start time in seconds from recording start.

end
Bucket end time in seconds from recording start.

samples
Total sample count in this bucket.

label
Formatted time range label (e.g. "5.0s-6.0s" or "4m20.0s-4m30.0s").

```
print(b.label)  # "1.0s-2.0s"
```

stacks
Stacks that occurred in this time bucket. Lazily built when accessed.

```
for s in b.stacks:
    if s.has("HashMap"):
        print(s.leaf.name)
```

profile
Full Profile object wrapping this bucket's data. Supports all Profile methods, such as filter(), hot(), and tree().

```
# Analyze this time slice as a standalone profile
top = b.profile.hot(5)
filtered = b.profile.filter(lambda s: s.has("GC"))
```
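Because start, end, and samples are plain numbers, derived metrics such as a per-bucket sampling rate are easy to compute. A minimal self-contained sketch with hypothetical values standing in for one bucket (no profile is opened here):

```python
# Hypothetical values for a single bucket: b.start, b.end, b.samples
start, end, samples = 1.0, 2.0, 138

duration = end - start     # bucket width in seconds
rate = samples / duration  # samples per second

print(rate)  # 138.0
```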
Methods
hot()
Return the top methods for this bucket.

```
hot(n=5, sort="self") → list[Method]
```

n: Number of methods to return.
sort: Sort by "self" or "total".

Returns: List of Method objects.
```
buckets = p.timeline(resolution="5s")
for b in buckets:
    print("\n" + b.label + ":")
    for m in b.hot(3):
        print(" " + rjust(round(m.self_pct, 1), 5) + "% " + m.name)
```

Output:

```
0.0s-5.0s:
  28.3% HashMap.resize
  18.1% String.indexOf
  12.5% System.arraycopy

5.0s-10.0s:
  31.2% HashMap.get
  22.4% ThreadPoolExecutor.runWorker
  10.8% ByteBuffer.put
```
Examples
Detect phase changes
```
p = open("profile.jfr")
buckets = p.timeline(resolution="10s")
for b in buckets:
    top = b.hot(1)
    if len(top) > 0:
        print(b.label + ": " + top[0].name + " (" + str(round(top[0].self_pct, 1)) + "%)")
```
Compare first vs last bucket
```
p = open("profile.jfr")
buckets = p.timeline(resolution="5s")
if len(buckets) >= 2:
    first = buckets[0].profile
    last = buckets[-1].profile
    d = diff(first, last)
    print("Regressions from start to end:")
    for e in d.regressions[:5]:
        print(" " + e.name + " +" + str(round(e.delta, 1)) + "%")
```
Find hottest time period
```
buckets = p.timeline(buckets=20)
hottest = max(buckets, key=lambda b: b.samples)
print("Hottest period: " + hottest.label)
print("Samples: " + str(hottest.samples))
print("\nTop methods:")
for m in hottest.hot(10):
    print(" " + rjust(round(m.self_pct, 1), 5) + "% " + m.name)
```
Filter buckets by condition
```
# Find time periods with high GC activity
buckets = p.timeline(resolution="5s")
for b in buckets:
    gc_stacks = [s for s in b.stacks if s.has("GC")]
    gc_samples = sum(s.samples for s in gc_stacks)
    gc_pct = 100.0 * gc_samples / b.samples if b.samples > 0 else 0
    if gc_pct > 20.0:
        print(b.label + ": GC " + str(round(gc_pct, 1)) + "%")
```
Windowed diff
```
# Compare consecutive 5s windows
buckets = p.timeline(resolution="5s")
for i in range(len(buckets) - 1):
    d = diff(buckets[i].profile, buckets[i+1].profile)
    if len(d.regressions) > 0:
        print("\n" + buckets[i].label + " → " + buckets[i+1].label + ":")
        for e in d.regressions[:3]:
            print(" " + e.name + " +" + str(round(e.delta, 1)) + "%")
```
Export bucket as standalone profile
```
# Get the 10-11s window as a profile
buckets = p.timeline(resolution="1s")
target = buckets[10]  # covers 10.0s-11.0s

# Use all Profile methods
print(target.profile.summary())
workers = target.profile.filter(lambda s: s.thread_has("worker"))
print(target.profile.tree("HashMap.get"))
```
Aggregate metrics across buckets
```
buckets = p.timeline(resolution="1s")
method_trends = {}
for i, b in enumerate(buckets):
    for m in b.hot():
        if m.name not in method_trends:
            method_trends[m.name] = []
        method_trends[m.name].append((i, m.self_pct))

# Find methods with increasing trend
for name, data in method_trends.items():
    if len(data) >= 5:
        first_avg = sum(pct for _, pct in data[:3]) / 3
        last_avg = sum(pct for _, pct in data[-3:]) / 3
        if last_avg > first_avg * 1.5:
            print(name + " increased: " + str(round(first_avg, 1)) + "% → " + str(round(last_avg, 1)) + "%")
```
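The trend check above compares the average of the first three observations with the average of the last three. A self-contained sketch of that heuristic on synthetic data (all numbers hypothetical, no profile required):

```python
# Synthetic (bucket_index, self_pct) observations for one method
data = [(0, 5.0), (1, 6.0), (2, 5.5), (3, 9.0), (4, 10.0), (5, 11.0)]

first_avg = sum(pct for _, pct in data[:3]) / 3   # average of first 3 buckets
last_avg = sum(pct for _, pct in data[-3:]) / 3   # average of last 3 buckets

# Flag a >1.5x increase from the start to the end of the recording
print(last_avg > first_avg * 1.5)  # True
```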