
How to Use Spark Profiler to Diagnose Minecraft Server Lag & Performance


What Is Spark?

Spark is a performance profiling plugin/mod for Minecraft servers (Forge, Fabric, or Bukkit/Spigot-based) that helps you diagnose lag, memory issues, TPS drops, and other bottlenecks.


Key features:

  • Real-time TPS / MSPT / CPU / memory monitoring
  • Profiling of server threads, method calls, and memory allocations
  • Heap dumps / memory summaries / GC info
  • Web-based viewer that presents the profiling data in a readable format

Spark is widely used by hosting providers to assist users with performance diagnostics.
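
For plugin and mod developers, Spark also exposes a small Java API that reads the same statistics programmatically. The sketch below is illustrative only, assuming the me.lucko:spark-api artifact is on the compile classpath; the class name TpsWatcher and the alert threshold are ours, not part of Spark:

    import me.lucko.spark.api.Spark;
    import me.lucko.spark.api.SparkProvider;
    import me.lucko.spark.api.statistic.StatisticWindow;
    import me.lucko.spark.api.statistic.misc.DoubleAverageInfo;
    import me.lucko.spark.api.statistic.types.DoubleStatistic;
    import me.lucko.spark.api.statistic.types.GenericStatistic;

    public class TpsWatcher {

        // Prints a warning when the 1-minute TPS average dips below the threshold.
        public static void checkTps(double threshold) {
            Spark spark = SparkProvider.get(); // throws if Spark is not installed

            // TPS over the last minute (20.0 means a healthy, lag-free server).
            DoubleStatistic<StatisticWindow.TicksPerSecond> tps = spark.tps();
            double lastMinute = tps.poll(StatisticWindow.TicksPerSecond.MINUTES_1);

            // Mean MSPT over the last minute; sustained values above 50 ms mean the
            // server can no longer hold 20 TPS. mspt() may be null on platforms
            // that do not expose tick durations.
            GenericStatistic<DoubleAverageInfo, StatisticWindow.MillisPerTick> mspt = spark.mspt();
            String tickInfo = (mspt == null) ? "n/a"
                    : String.format("%.1f ms", mspt.poll(StatisticWindow.MillisPerTick.MINUTES_1).mean());

            if (lastMinute < threshold) {
                System.out.println("TPS is " + lastMinute + " (mean tick: " + tickInfo + ") - time to profile!");
            }
        }
    }

This is the same TPS/MSPT data the commands below report; the API simply makes it scriptable.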


Installation of Spark

The installation process depends on your server type (modded / plugin). Here’s how it’s typically done:

  • Spigot / Paper / Bukkit (plugin): Download the Spark plugin JAR (from SpigotMC or the official Spark site) and place it in the plugins/ folder.
  • Forge / Fabric (modded): Download the Spark mod JAR matching your Minecraft and mod loader version, then upload it to the mods/ folder.

After placing the JAR, restart your server fully to activate Spark.
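
Tip: after the restart you can confirm Spark loaded by running /spark in the console; if it is active, it should respond with its version and command help rather than an unknown-command error.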


Basic Spark Commands & Usage

Once Spark is installed, you can use specific commands to profile and inspect server performance.


Common commands

  • /spark profiler start: begin profiling (default mode)
  • /spark profiler stop: stop profiling and generate a report (you’ll get a link)
  • /spark profiler info: check whether the profiler is currently running
  • /spark profiler start --timeout <seconds>: run the profiler for a set number of seconds
  • /spark profiler start --thread *: profile all threads
  • /spark profiler start --alloc: profile memory allocations instead of CPU usage
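
Flags can be combined. For example, this illustrative invocation captures a 60-second CPU profile across every thread:

/spark profiler start --thread * --timeout 60

With --timeout, the profiler stops itself and uploads the report when the time is up, so /spark profiler stop is only needed to end an open-ended run early.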

Other useful commands:

  • /spark healthreport — full snapshot of server health (TPS, memory, CPU, disk, JVM args)
  • /spark gc — analyze Garbage Collection activity
  • /spark heapsummary — get a memory summary / snapshot


Profiling Workflow & Best Practices

  • When to profile - Run Spark while the lag or performance issue is happening, since the profiler can only record activity that occurs while it is running.
  • Duration
    • For general profiling, 30–60 seconds is enough to gather meaningful data.
    • If lag is sporadic, use flags like --only-ticks-over <ms> to only record slow ticks.
  • Stopping & retrieving report - After stopping the profiler, you’ll receive a URL (via console or chat). Open that in your browser to view the profiling report.
  • Reading the report
    • The report shows threads, call frames, execution times, and percentages.
    • Expand the “Server thread” section to see which operations (e.g. tick, plugins, entity updates) are consuming time.
    • Hover over nodes to see ms values; percentages show relative cost.
    • Use the flame graph or “flat view” to isolate performance hot spots.
  • Share with support or devs

You can share the generated report URL for help diagnosing issues.


Common Use Cases & Examples

  • Consistent TPS drop → run /spark profiler start --timeout 30, then inspect which tasks are dominating time once the report link appears
  • Lag spikes / occasional stutters → use /spark profiler start --only-ticks-over 70 --timeout 60 (adjust the threshold as needed; see the note after this list)
  • Memory leaks or GC issues → use /spark heapsummary or /spark gc and review memory usage breakdowns
  • Server health checks → /spark healthreport for an overview of system state
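
A note on the --only-ticks-over threshold: at the target rate of 20 TPS, a server has a budget of 50 ms per tick, so a value of 70 records only ticks that overran that budget by 40% or more. Raise or lower it to match how severe your spikes are.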


Example (Step-by-Step)

  1. Install Spark as a plugin or mod (see above).
  2. Wait until the lag or performance degradation is happening.
  3. Run in console:
/spark profiler start --timeout 30
  4. Wait 30 seconds; the profiler stops on its own and prints the report link (no manual /spark profiler stop is needed when --timeout is used).
  5. Copy the link provided in the console.
  6. Open the link to view the report.
  7. Expand “Server thread” and follow nodes with high % to identify troublemakers (plugins, entity ticks, chunk loads, etc.).
  8. Use memory or GC commands if needed.
  9. Use the findings to optimize your setup, remove heavy plugins, or share with support.


Troubleshooting & Tips

  • If no report link appears, ensure permissions are correct and Spark loaded successfully.
  • If profiling doesn’t collect data, try longer duration or different flags (e.g. --thread *).
  • If class/method names are obfuscated, enable deobfuscation mappings in the Spark viewer.
  • Use the profiler selectively, not continuously — while Spark is lightweight, constant profiling can have minor overhead.
  • Don’t forget to restart the server after installing Spark to ensure it’s fully loaded.
  • Use multiple profiles if your issue is intermittent.



Updated on: 13/10/2025
