

We introduce Loki, a new framework for robust simulation of fluid, rigid, and deformable objects with non-compromising fidelity on any single element, and capabilities for coupling and representation transitions across multiple elements. A key ingredient of our system is a hierarchical block-tile-cell sparse grid data structure that is distributable to an arbitrary number of Message Passing Interface (MPI) ranks. The system is built upon a performance-portable C++ programming model targeting major High-Performance Computing (HPC) platforms. In this paper, we present a four-layer distributed simulation system and its adaptation to the Material Point Method (MPM). Our framework has proven powerful and intuitive enough for voluntary artist adoption and has delivered creature and FX simulations for multiple major movie productions in the preceding four years. We demonstrate a variety of solvers within the framework and their interactions, including FLIP-style liquids, spatially adaptive volumetric fluids, SPH, MPM, and mesh-based solids, including but not limited to discrete elastic rods, elastons, and FEM with state-of-the-art constitutive models.
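
As a rough sketch of how a block-tile-cell hierarchy of this kind could be laid out, the C++ fragment below uses a top-level hash map of blocks, blocks holding sparsely allocated tiles, and tiles holding dense cells, with each block tagged by the MPI rank that owns it. All names, sizes, and the hashing scheme are illustrative assumptions, not Loki's actual implementation.

```cpp
#include <array>
#include <cstdint>
#include <functional>
#include <memory>
#include <unordered_map>

// Hypothetical block-tile-cell sparse grid; sizes and indexing are assumptions.
constexpr int kTileDim  = 4;   // cells per tile edge
constexpr int kBlockDim = 4;   // tiles per block edge

struct Cell {
    float density = 0.0f;
    std::array<float, 3> velocity{};
};

// A tile is a dense brick of cells, allocated only where data exists.
struct Tile {
    std::array<Cell, kTileDim * kTileDim * kTileDim> cells;
};

// A block owns a sparse set of tiles and is the unit assigned to an MPI rank.
struct Block {
    int ownerRank = 0;  // MPI rank that owns this block
    std::array<std::unique_ptr<Tile>, kBlockDim * kBlockDim * kBlockDim> tiles;

    Cell& cellAt(int ti, int tj, int tk, int ci, int cj, int ck) {
        auto& tile = tiles[(ti * kBlockDim + tj) * kBlockDim + tk];
        if (!tile) tile = std::make_unique<Tile>();  // allocate on first touch
        return tile->cells[(ci * kTileDim + cj) * kTileDim + ck];
    }
};

// Top level: a hash map from integer block coordinates to blocks,
// so only occupied regions of space consume memory.
struct BlockKey {
    int x, y, z;
    bool operator==(const BlockKey& o) const { return x == o.x && y == o.y && z == o.z; }
};
struct BlockKeyHash {
    size_t operator()(const BlockKey& k) const {
        return std::hash<int64_t>()((int64_t(k.x) * 73856093) ^
                                    (int64_t(k.y) * 19349663) ^
                                    (int64_t(k.z) * 83492791));
    }
};
using SparseGrid = std::unordered_map<BlockKey, Block, BlockKeyHash>;

int main() {
    SparseGrid grid;
    // Touch one cell; only the containing block and tile get allocated.
    grid[{12, -3, 7}].cellAt(1, 2, 3, 0, 1, 2).density = 1.0f;
    return 0;
}
```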

Distribution over MPI, custom linear equation solvers, and aggressive application of sparse techniques keep performance within production requirements.
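
The abstract does not detail the custom linear equation solvers, but a common building block for this kind of MPI-distributed sparse solve is a matrix-free conjugate gradient whose only global communication is the reduction of dot products. The sketch below illustrates that generic pattern only; the operator application and vector layout are placeholders, not Loki's solver.

```cpp
#include <mpi.h>
#include <cmath>
#include <functional>
#include <vector>

// Dot product over locally owned entries, reduced across all ranks.
double globalDot(const std::vector<double>& a, const std::vector<double>& b) {
    double local = 0.0;
    for (size_t i = 0; i < a.size(); ++i) local += a[i] * b[i];
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return global;
}

// Matrix-free conjugate gradient: `applyA` evaluates A*x on the locally
// owned rows (performing any halo exchange it needs internally).
void conjugateGradient(
    const std::function<void(const std::vector<double>&, std::vector<double>&)>& applyA,
    const std::vector<double>& b, std::vector<double>& x,
    int maxIters, double tol) {
    std::vector<double> r = b, p, Ap(b.size());
    applyA(x, Ap);
    for (size_t i = 0; i < r.size(); ++i) r[i] -= Ap[i];
    p = r;
    double rsOld = globalDot(r, r);
    for (int it = 0; it < maxIters && std::sqrt(rsOld) > tol; ++it) {
        applyA(p, Ap);
        const double alpha = rsOld / globalDot(p, Ap);
        for (size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        const double rsNew = globalDot(r, r);
        for (size_t i = 0; i < p.size(); ++i) p[i] = r[i] + (rsNew / rsOld) * p[i];
        rsOld = rsNew;
    }
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    // Trivial usage example: solve I*x = b on each rank's local entries.
    std::vector<double> b(8, 1.0), x(8, 0.0);
    conjugateGradient([](const std::vector<double>& in, std::vector<double>& out) { out = in; },
                      b, x, 100, 1e-8);
    MPI_Finalize();
    return 0;
}
```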

We also provide a consistent treatment of components used in several domains, such as unified collision and attachment constraints across 1D, 2D, and 3D deforming and rigid objects. This leads to intuitive setups for coupled simulations such as hair in the wind, or objects transitioning from one representation to another, for example bulk water FLIP particles to SPH spray particles to volumetric mist. Loki adapts multiple best-in-class solvers into a unified framework driven by a declarative state machine where users declare 'what' is simulated but not 'when,' so an automatic scheduling system takes care of mixing any combination of objects.
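
To make the "what, not when" idea concrete, the hypothetical snippet below declares objects, solvers, and couplings and leaves ordering to an automatic scheduler. Every type and method name here is invented for illustration and is not Loki's API.

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical declarative scene description: the user states which objects
// exist and how they interact; a scheduler decides execution order.
struct SimObject {
    std::string name;
    std::string solver;  // e.g. "FLIP", "SPH", "MPM", "FEM", "rods"
};

struct Coupling {
    std::string a, b;
    std::string kind;    // e.g. "collision", "attachment", "transition"
};

class SceneDeclaration {
public:
    void declareObject(std::string name, std::string solver) {
        objects_.push_back({std::move(name), std::move(solver)});
    }
    void declareCoupling(std::string a, std::string b, std::string kind) {
        couplings_.push_back({std::move(a), std::move(b), std::move(kind)});
    }
    // A real system would analyze these dependencies and interleave solver
    // substeps automatically; here we only store the declarations.
    const std::vector<SimObject>& objects() const { return objects_; }
    const std::vector<Coupling>& couplings() const { return couplings_; }
private:
    std::vector<SimObject> objects_;
    std::vector<Coupling> couplings_;
};

int main() {
    SceneDeclaration scene;
    scene.declareObject("bulkWater", "FLIP");
    scene.declareObject("spray", "SPH");
    scene.declareObject("mist", "adaptiveVolume");
    scene.declareObject("hair", "rods");
    // Representation transitions and collisions are declared, not ordered:
    scene.declareCoupling("bulkWater", "spray", "transition");
    scene.declareCoupling("spray", "mist", "transition");
    scene.declareCoupling("hair", "bulkWater", "collision");
    return 0;
}
```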

