LiquidBounce integrates deep learning capabilities through DJL (Deep Java Library), enabling AI-powered combat and movement features. The system uses PyTorch models for real-time predictions.
Architecture
DeepLearningEngine
Location: deeplearn/DeepLearningEngine.kt:31
The core engine manages PyTorch initialization and model storage:
object DeepLearningEngine {
    var isInitialized = false
        private set

    private val deepLearningFolder: File // .minecraft/LiquidBounce/deeplearning/
    val djlCacheFolder: File             // DJL library cache
    val enginesCacheFolder: File         // PyTorch binaries
    val modelsFolder: File               // Trained models

    suspend fun init(task: Task)
}
ModelManager
Location: deeplearn/ModelManager.kt:33
Manages available models and provides switching:
object ModelManager : EventListener, ValueGroup("AI") {
    val combatModels = arrayOf("21KC11KP", "19KC8KP")

    val models = modes(this, "Model", 0) { modeValueGroup ->
        combatModels.mapToArray { name ->
            TwoDimensionalRegressionModel(name, modeValueGroup)
        }
    }

    fun load()
    fun unload()
    fun reload()
}
Initialization
The engine initializes asynchronously to avoid blocking the game:
suspend fun init(task: Task) {
    logger.info("Initializing engine...")

    val engine = withContext(Dispatchers.IO) {
        Engine.getInstance() // Downloads PyTorch if needed
    }

    val name = engine.engineName
    val version = engine.version
    val deviceType = engine.defaultDevice().deviceType
    logger.info("Using deep learning engine $name $version on $deviceType.")

    isInitialized = true
}
Engine Configuration
PyTorch is configured at startup:
init {
    System.setProperty("DJL_CACHE_DIR", djlCacheFolder.absolutePath)
    System.setProperty("ENGINE_CACHE_DIR", enginesCacheFolder.absolutePath)
    System.setProperty("OPT_OUT_TRACKING", "true") // Disable telemetry

    // Use PyTorch CPU flavor (avoid CUDA conflicts)
    System.setProperty("DJL_DEFAULT_ENGINE", "PyTorch")
    System.setProperty("PYTORCH_FLAVOR", "cpu")
}
CUDA Support: The system uses CPU-based PyTorch to avoid conflicts with GPU drivers and reduce download size. CUDA acceleration is intentionally disabled.
Model Types
TwoDimensionalRegressionModel
Location: deeplearn/models/TwoDimensionalRegressionModel.kt
Used for predicting 2D coordinates (e.g., optimal aim position):
class TwoDimensionalRegressionModel(
    name: String,
    parent: ChoiceConfigurable<*>
) : ModelChoice(name, parent) {
    private var predictor: Predictor<NDArray, FloatArray>? = null

    fun predict(input: FloatArray): Vec2f {
        // Fail with a clear message if predict() is called before load()
        val predictor = checkNotNull(predictor) { "Model is not loaded" }
        val ndInput = manager.create(input)
        val output = predictor.predict(ndInput)
        return Vec2f(output[0], output[1])
    }
}
Models are stored as PyTorch TorchScript files:
LiquidBounce/deeplearning/models/
├── 21KC11KP/
│ ├── model.pt # TorchScript model
│ └── metadata.json # Model info
└── 19KC8KP/
├── model.pt
└── metadata.json
Data Pipeline
Translators
Location: deeplearn/translators/
Translators convert between game data and model inputs:
class CombatDataTranslator : Translator<CombatData, FloatArray> {
    override fun processInput(ctx: TranslatorContext, input: CombatData): NDList {
        val features = floatArrayOf(
            input.distance.toFloat(),
            input.playerVelocityX.toFloat(),
            input.playerVelocityY.toFloat(),
            input.playerVelocityZ.toFloat(),
            input.targetVelocityX.toFloat(),
            input.targetVelocityY.toFloat(),
            input.targetVelocityZ.toFloat(),
            input.yawDifference.toFloat(),
            input.pitchDifference.toFloat()
        )
        return NDList(ctx.ndManager.create(features))
    }

    override fun processOutput(ctx: TranslatorContext, list: NDList): FloatArray {
        return list[0].toFloatArray()
    }
}
Feature Engineering
Location: deeplearn/data/
Collect and normalize game state:
data class CombatData(
    val distance: Double,
    val playerVelocityX: Double,
    val playerVelocityY: Double,
    val playerVelocityZ: Double,
    val targetVelocityX: Double,
    val targetVelocityY: Double,
    val targetVelocityZ: Double,
    val yawDifference: Float,
    val pitchDifference: Float
) {
    fun normalize(): CombatData {
        // Normalize features to [0, 1] range
        return copy(
            distance = (distance / 6.0).coerceIn(0.0, 1.0),
            playerVelocityX = (playerVelocityX + 1.0) / 2.0,
            // ...
        )
    }
}
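The elided velocity terms follow the same min-max pattern as the distance. A runnable sketch of that scaling in isolation (the [-1, 1] velocity range here is an assumption for illustration, not a confirmed game constant):

```kotlin
// Standalone sketch of the min-max scaling used above. The [-1.0, 1.0]
// velocity range is an illustrative assumption, not a value taken from
// the LiquidBounce source.
fun normalizeDistance(distance: Double, maxReach: Double = 6.0): Double =
    (distance / maxReach).coerceIn(0.0, 1.0)

fun normalizeVelocity(v: Double, min: Double = -1.0, max: Double = 1.0): Double =
    (v.coerceIn(min, max) - min) / (max - min)

fun main() {
    println(normalizeDistance(3.0))  // 0.5 — halfway to max reach
    println(normalizeVelocity(0.0))  // 0.5 — zero velocity maps to the middle
    println(normalizeVelocity(-2.0)) // 0.0 — out-of-range values are clamped
}
```

Keeping the same scaling constants at training and inference time is essential; a model trained on [0, 1] inputs will produce garbage if fed raw block coordinates.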
Usage in Combat
Integrating AI predictions into KillAura:
class ModuleKillAura : ClientModule("KillAura", Category.COMBAT) {
    val useAI by boolean("UseAI", false)

    val rotationUpdateHandler = handler<RotationUpdateEvent> { event ->
        if (!useAI || !DeepLearningEngine.isInitialized) {
            return@handler
        }

        val target = currentTarget ?: return@handler

        // Collect features
        val data = CombatData(
            distance = player.distanceTo(target),
            playerVelocityX = player.velocity.x,
            playerVelocityY = player.velocity.y,
            playerVelocityZ = player.velocity.z,
            targetVelocityX = target.velocity.x,
            targetVelocityY = target.velocity.y,
            targetVelocityZ = target.velocity.z,
            yawDifference = /* ... */,
            pitchDifference = /* ... */
        ).normalize()

        // Get AI prediction
        val model = ModelManager.models.activeMode as TwoDimensionalRegressionModel
        val prediction = model.predict(data.toFloatArray())

        // Apply prediction
        event.rotation = Rotation(
            yaw = prediction.x,
            pitch = prediction.y
        )
    }
}
Training Models
Models are trained offline using collected data:
Data Collection
val trainingDataCollector = handler<AttackEntityEvent> { event ->
    if (collectingData) {
        val data = CombatData(/* ... */).normalize()
        val label = floatArrayOf(actualYaw, actualPitch)
        trainingDataset.add(Pair(data, label))
    }
}
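The collected pairs have to leave the game as files before the external pipeline can train on them. A hypothetical export helper (`exportDataset` is not part of the codebase, just a sketch of one way to serialize the dataset as CSV):

```kotlin
import java.io.File

// Hypothetical helper (not in the LiquidBounce codebase): writes collected
// (features, label) pairs as CSV so an external Python pipeline can load them.
fun exportDataset(samples: List<Pair<FloatArray, FloatArray>>, file: File) {
    file.printWriter().use { out ->
        // 9 input features followed by the 2 label components
        out.println("f0,f1,f2,f3,f4,f5,f6,f7,f8,yaw,pitch")
        for ((features, label) in samples) {
            out.println((features + label).joinToString(","))
        }
    }
}

fun main() {
    val file = File.createTempFile("dataset", ".csv")
    val sample = FloatArray(9) { it * 0.1f } to floatArrayOf(10f, 5f)
    exportDataset(listOf(sample), file)
    println(file.readLines().size) // 2 — header plus one sample row
    file.delete()
}
```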
Training Pipeline (External)
import torch
import torch.nn as nn

class AimPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(9, 64),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 2)  # Yaw, Pitch
        )

    def forward(self, x):
        return self.layers(x)

# Train model
model = AimPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = nn.MSELoss()

for epoch in range(100):
    for features, labels in dataloader:
        optimizer.zero_grad()
        outputs = model(features)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

# Export to TorchScript (eval() disables Dropout for inference)
model.eval()
scripted = torch.jit.script(model)
scripted.save("model.pt")
Inference Time
Models are optimized for real-time inference:
fun load() {
    runCatching {
        measureTime {
            model.load()
        }
    }.onSuccess { time ->
        logger.info("Loaded model '${model.name}' in ${time.inWholeMilliseconds}ms.")
    }
}
Typical inference time: 1-3ms on CPU
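The 1-3 ms figure depends on hardware and model size; measuring it for a specific setup takes only a small micro-benchmark. In this sketch, `stubPredict` is a stand-in for the real DJL predictor so the example runs on its own; swap in `TwoDimensionalRegressionModel.predict` to measure actual latency:

```kotlin
import kotlin.time.measureTime

// Stand-in for the real predictor (an assumption, so the example is
// self-contained); replace with the actual model call to benchmark it.
fun stubPredict(input: FloatArray): FloatArray =
    floatArrayOf(input.sum(), input.average().toFloat())

fun main() {
    val input = FloatArray(9) { it * 0.1f }
    repeat(100) { stubPredict(input) } // warm up the JIT first

    val elapsed = measureTime {
        repeat(1_000) { stubPredict(input) }
    }
    println("avg ${elapsed.inWholeNanoseconds / 1_000} ns/call")
}
```

The warm-up loop matters: the first few calls pay JIT and allocation costs that would otherwise inflate the average.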
Memory Usage
Models are lazy-loaded:
private var predictor: Predictor<NDArray, FloatArray>? = null

fun load() {
    if (predictor == null) {
        predictor = model.newPredictor(translator)
    }
}

fun close() {
    predictor?.close()
    predictor = null
}
Model Management
Loading Custom Models
Place models in .minecraft/LiquidBounce/deeplearning/models/:
models/
└── MyCustomModel/
├── model.pt # Required: TorchScript model
└── metadata.json # Optional: Model info
The system automatically discovers and loads custom models:
private val availableCombatModels: List<String>
    get() = modelsFolder
        .listFiles { file -> file.isDirectory }
        ?.map { file -> file.nameWithoutExtension } ?: emptyList()
Switching Models
Models can be switched at runtime:
// Via config UI
ModelManager.models.setByString("21KC11KP")
// Programmatically
ModelManager.reload()
Debugging
Enable Logging
private val logger: Logger = LogManager.getLogger("LiquidBounce/AI")
logger.info("Model prediction: yaw=${pred.x}, pitch=${pred.y}")
logger.debug("Input features: ${features.contentToString()}")
Validate Predictions
val prediction = model.predict(input)
require(prediction.x in -180f..180f) { "Invalid yaw: ${prediction.x}" }
require(prediction.y in -90f..90f) { "Invalid pitch: ${prediction.y}" }
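Rather than aborting on an out-of-range prediction, the angles can also be repaired: yaw wraps around the circle, while pitch has hard limits and is clamped. These helpers are illustrative, not part of the codebase:

```kotlin
// Illustrative alternative to failing on out-of-range output: wrap yaw
// into [-180, 180) and clamp pitch into [-90, 90].
fun wrapYaw(yaw: Float): Float {
    var wrapped = yaw % 360f
    if (wrapped >= 180f) wrapped -= 360f
    if (wrapped < -180f) wrapped += 360f
    return wrapped
}

fun clampPitch(pitch: Float): Float = pitch.coerceIn(-90f, 90f)

fun main() {
    println(wrapYaw(270f))    // -90.0
    println(wrapYaw(-190f))   // 170.0
    println(clampPitch(120f)) // 90.0
}
```

Whether to wrap or reject depends on the failure mode: a yaw of 181° is probably a harmless wrap-around, while a pitch of 400° usually signals a broken model or un-normalized input.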
Limitations
Current Limitations:
- CPU-only inference (no GPU acceleration)
- Limited to regression models (classification support planned)
- Models must be TorchScript format
- No online learning (models trained offline)
Best Practices
Model Versioning
Use meaningful model names:
21KC11KP → 2021 KillAura Classifier v11 KP-trained
19KC8KP → 2019 KillAura Classifier v8 KP-trained
Fallback Logic
Always provide non-AI fallback:
val rotation = if (useAI && DeepLearningEngine.isInitialized) {
    aiPredict()
} else {
    traditionalAim()
}
Input Sanitization
Sanitize features before prediction:
fun sanitize(data: CombatData): CombatData {
    return data.copy(
        distance = data.distance.coerceIn(0.0, 6.0),
        // Ensure no NaN or Inf values
    )
}
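The NaN/Infinity guard hinted at in the comment can be written as a small extension. A sketch (the `orDefault` name is made up here, not from the codebase):

```kotlin
// Replace any non-finite value with a safe default before it reaches
// the model; NaN propagates through every layer and poisons the output.
fun Double.orDefault(default: Double = 0.0): Double =
    if (isFinite()) this else default

fun main() {
    println(Double.NaN.orDefault())               // 0.0
    println(Double.POSITIVE_INFINITY.orDefault()) // 0.0
    println(3.5.orDefault())                      // 3.5
}
```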