Recent reports from other groups, together with our own results, show that GPU-based computations are often ~10-250 times faster than heavily optimized CPU-based implementations. In recent years, we have developed computational methodology for the generation of random numbers, the calculation of forces, and the numerical solution of Langevin equations on the GPU (see Figure). We have incorporated these numerical algorithms into a new software package (SOP-GPU) for Langevin simulations of biomolecules on a GPU, using the \(C_\alpha\)-based coarse-grained Self-Organized Polymer (SOP) model of a protein. This package will enable us to perform dynamic force measurements in silico, which mimic AFM- and optical-tweezer-based dynamic force experiments in vitro, on large supramolecular assemblies (\(10^4\)-\(10^6\) residues) over the experimental millisecond-to-second timescale. We are now adapting the SOP-GPU package to new-generation GPUs with MIMD (Multiple Instruction, Multiple Data) architecture. These efforts will allow the simulation results to be used in interpreting experimental data, e.g., on the mechanical unfolding of proteins and the forced indentation of viral capsids. We have also developed a GPU-based implementation of all-atom MD simulations of biomolecules in implicit water, using the EEF1 and SASA models, which yields a ~50-200-fold speedup on a GPU, depending on the system size. Combining these methods (Langevin and all-atom simulations) allows for multiscale modeling of biological processes. Using our multiple-runs-per-GPU approach, we plan to couple these simulation methods with parallel tempering (Replica Exchange) algorithms to provide researchers with the theoretical tools needed to resolve the thermodynamics and to map the free energy landscape of biomolecular processes. We also plan to expand into the field of mixed quantum-classical descriptions of biochemical transitions in condensed phases.
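The numerical solution of the Langevin equations mentioned above can be illustrated, in the overdamped limit commonly used with coarse-grained models, by a simple first-order (Euler-Maruyama) update. The sketch below is an illustrative single-CPU version in Python/NumPy, not the SOP-GPU implementation; the function name `langevin_step` and all parameter values are our own choices for the example.

```python
import numpy as np

def langevin_step(x, force_fn, dt, zeta, kT, rng):
    """One overdamped Langevin step (illustrative sketch, not SOP-GPU code):
        zeta * dx/dt = F(x) + G(t),
    discretized as
        x_{n+1} = x_n + (dt / zeta) * F(x_n) + g_n,
    where g_n is Gaussian with zero mean and variance 2*kT*dt/zeta
    per component (fluctuation-dissipation relation).
    """
    noise = rng.normal(0.0, np.sqrt(2.0 * kT * dt / zeta), size=x.shape)
    return x + (dt / zeta) * force_fn(x) + noise

# Example: particles relaxing in a harmonic trap F(x) = -k*x;
# at equilibrium the positional variance should approach kT/k.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for _ in range(5000):
    x = langevin_step(x, lambda y: -y, dt=0.01, zeta=1.0, kT=1.0, rng=rng)
```

On the GPU, each independent trajectory (or each residue of a large system) maps naturally onto a separate thread, with the random-number generation done on the device, which is the source of the speedups quoted above.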
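The Replica Exchange step at the heart of parallel tempering is a standard Metropolis swap between two replicas held at different temperatures. A minimal sketch of the acceptance criterion, assuming potential energies \(E_i, E_j\) and temperatures \(T_i, T_j\) (the function name `swap_accept` is hypothetical, introduced here for illustration):

```python
import math
import random

def swap_accept(E_i, E_j, T_i, T_j, kB=1.0, rng=random):
    """Metropolis criterion for exchanging replicas i and j:
    accept with probability min(1, exp(delta)), where
        delta = (1/(kB*T_i) - 1/(kB*T_j)) * (E_i - E_j).
    A swap that moves the higher-energy configuration to the
    higher temperature (delta >= 0) is always accepted.
    """
    delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)

# Higher-energy replica at the lower temperature: always accepted.
swap_accept(10.0, 5.0, T_i=1.0, T_j=2.0)   # True
```

In the multiple-runs-per-GPU setting described above, each replica occupies its own block of threads, so the energies needed for this test are already resident on the device and only the swap decision involves the host.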