
What is it that I do?

  • My job is to develop faster, more accurate, and more efficient algorithms.

  • The areas I have worked in include Quantum Chemistry, Machine Learning, and Finance.

  • I am currently working on 3D-modeling, Algorithms, and Data Science.

How do I go about doing this work?

  • I use whatever it takes, usually mathematical and statistical methods.

  • In the past I've used asymptotic analysis to derive more efficient algorithms.

  • I also use other methods from Partial Differential Equations, such as spectral methods and Green's functions.

  • Numerical tests are carried out on the computer using Matlab, Fortran, Python, or C++.

  • I consider several different Neural Network structures for solving problems. The structures are based on physically accurate requirements or analytically derived conditions.

  • I am also interested in applying methods based on Stochastic Calculus to better deal with noisy data when discovering dynamics from data. This has applications in Finance (a rough sketch of the setup follows this list).
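To make the last point concrete, the setup I have in mind can be written as a stochastic differential equation. This is only an illustrative sketch with generic notation, not a model taken from a specific project:

$$ dX_t = f(X_t)\,dt + \sigma(X_t)\,dW_t , $$

where W_t is a Brownian motion. The task is to recover the drift f and the diffusion sigma from noisy, discretely sampled paths of X_t, for example asset prices in finance.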

Asymptotic analysis for the Linear Schrodinger equation:

Here we study the solutions of the linear Schrodinger equation in the semiclassical regime. We formulate the Bloch-based frozen Gaussian approximation. This highly efficient asymptotic solution is suitable for systems with periodic potentials, such as crystalline solids.
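For reference, the semiclassical linear Schrodinger equation with a periodic lattice potential takes the following standard form (the notation is a sketch of the usual setup and may differ from the exact scaling used in the paper):

$$ i\varepsilon\,\partial_t \psi = -\frac{\varepsilon^2}{2}\,\Delta \psi + V\!\left(\frac{x}{\varepsilon}\right)\psi + U(x)\,\psi, \qquad 0 < \varepsilon \ll 1, $$

where V is periodic with respect to the crystal lattice (and is resolved through its Bloch eigenfunctions) and U is a slowly varying external potential.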

Real part of the first 4 Bloch eigenfunctions for the potential, extended periodically on [-0.5, 0.5].

Asymptotic analysis for the Nonlinear Schrodinger equation:

Here we study the solutions of the cubic nonlinear Schrodinger equation in the semiclassical regime. We formulate a time splitting scheme based on the frozen Gaussian approximation. We also present artificial boundary conditions for our algorithm to deal with reflection of the waves at the boundaries.
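For reference, the semiclassical cubic NLS reads, in a standard form (the scaling of the nonlinear term varies across regimes, so this is only a sketch):

$$ i\varepsilon\,\partial_t \psi = -\frac{\varepsilon^2}{2}\,\partial_{xx}\psi + V(x)\,\psi + \lambda\,|\psi|^{2}\psi, \qquad 0 < \varepsilon \ll 1 . $$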

The figure on the left is a typical solution of the NLS when epsilon is small (0.0625 here). The solution becomes highly oscillatory as epsilon decreases, and this is precisely the regime in which the frozen Gaussian approximation approximates the exact solution well.

Time dependent density functional theory (Kohn-Sham formalism):

The time-dependent Kohn-Sham (TDKS) equations are among the most widely used models for simulating atomic systems. Due to the density dependence, they resemble the cubic NLS, but are slightly more complicated because of the Hartree and exchange potentials. The equations presented here use the Born-Oppenheimer approximation (no nuclear motion).
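In atomic units, and with the Born-Oppenheimer assumption mentioned above, the TDKS system takes the standard form (a sketch of the usual formulation; conventions for the exchange-correlation term vary):

$$ i\,\partial_t \psi_j = -\tfrac{1}{2}\,\Delta \psi_j + \big( V_{\mathrm{ext}} + V_{H}[\rho] + V_{xc}[\rho] \big)\,\psi_j, \qquad \rho(x,t) = \sum_{j=1}^{N} |\psi_j(x,t)|^{2}, $$

where $V_{H}[\rho](x) = \int \rho(y,t)/|x-y|\,\mathrm{d}y$ is the Hartree potential and $V_{xc}$ is the exchange-correlation potential. The dependence of $V_{H}$ and $V_{xc}$ on the density is what makes the system nonlinear, in analogy with the cubic NLS.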

The figure shows the electron density for a collection of 30 silver atoms arranged in an FCC configuration. This was computed using a Gaussian-beam-based algorithm for the time-dependent Kohn-Sham equations.

Frozen Gaussian Approximation for Time dependent density functional theory:

We can obtain a much more stable algorithm for the TDKS equations by using Gaussian functions of fixed width. This is known as the Frozen Gaussian Approximation (FGA). For the linear Schrodinger equation (in 3 dimensions, with potential U(x)) it takes the form sketched below.
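In the notation standard in the FGA literature (normalization conventions vary, so take this as a sketch), the frozen Gaussian approximation is the integral

$$ u^{\mathrm{FGA}}(t,x) = \frac{1}{(2\pi\varepsilon)^{9/2}} \int_{\mathbb{R}^{9}} a(t,q,p)\, e^{\,i\Phi(t,x,y,q,p)/\varepsilon}\, u_{0}(y)\, \mathrm{d}y\, \mathrm{d}q\, \mathrm{d}p, $$

with phase

$$ \Phi = S(t,q,p) + P\cdot(x - Q) + \frac{i}{2}\,|x - Q|^{2} + \frac{i}{2}\,|y - q|^{2} - p\cdot(y - q), $$

where (Q(t,q,p), P(t,q,p)) follow the classical Hamiltonian flow with Hamiltonian $|p|^{2}/2 + U(q)$, S is the classical action along the trajectory, and the amplitude a(t,q,p) solves an ODE along the same trajectory. The Gaussians have fixed (frozen) width in both x and y, which is what gives the method its stability.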

The above figures show a time-dependent numerical simulation using the FGA integral operator. On the left is a system of 57 (a magic number) atoms. On the right we compare our FGA algorithm with the Crank-Nicolson (CN) scheme. The FGA-based algorithm has spectral accuracy and is much faster than CN and other conventional (non-asymptotic) schemes.

Applications of Deep Learning: Discovering partial differential equations and improving algorithms used in scientific computing.

We develop a data-fitting algorithm based on the Implicit-Explicit Runge-Kutta (IMEX) scheme with a physics-aware structure. The tool was developed to deal with kinetic equations; a representative example of the kind of equation we target is given below. Since our algorithm is IMEX-based, it naturally handles the multiscale structure of kinetic equations. The algorithm also has a Recurrent Neural Network structure whose purpose is to determine nonlinearities in the data.
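A standard BGK-type relaxation model, given here as an illustrative stand-in rather than the exact equation from the original figure, is

$$ \partial_t f + v\,\partial_x f = \frac{1}{\varepsilon}\,\big( M[f] - f \big), $$

where f(t,x,v) is the particle distribution, $\varepsilon$ is the Knudsen number, and M[f] is the local Maxwellian. The stiff $1/\varepsilon$ relaxation term is treated implicitly and the transport term explicitly, which is exactly the split an IMEX scheme exploits.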


Deep Learning

Deep Learning has drawn a lot of interest in recent years. I am interested in using deep learning for scientific computing. Neural networks can be great at approximating functions: for many smooth functions, a recursive sequence of affine transformations, each followed by a nonlinear activation function (the so-called feed-forward network), is enough to capture their features. However, this type of neural network still cannot solve many problems, either because of its structure or because of a lack of computational power. I am currently interested in developing more efficient structures for solving scientific problems.
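As a concrete illustration of the affine-map-plus-activation structure described above, here is a minimal feed-forward network in plain Python/NumPy. It is only a sketch: the layer sizes, the tanh activation, and the random weights are arbitrary choices, not taken from any specific project of mine.

    import numpy as np

    def feed_forward(x, weights, biases):
        # Each hidden layer applies an affine map (W @ a + b) followed by a
        # nonlinear activation (tanh here); the output layer is kept linear.
        a = x
        for W, b in zip(weights[:-1], biases[:-1]):
            a = np.tanh(W @ a + b)
        return weights[-1] @ a + biases[-1]

    # Example: a small network mapping R^2 to R with two hidden layers.
    rng = np.random.default_rng(0)
    sizes = [2, 16, 16, 1]
    weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(m) for m in sizes[1:]]
    print(feed_forward(np.array([0.5, -1.0]), weights, biases))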


A note to the reader: I have a Github account (Ricard0000 (Ricardo Delgadillo (Ph.D.)) (github.com)), but note that the code posted there is not yet production-level code. I tend to spend more time discovering algorithms than polishing them to production quality.

Discovering Dynamics from Data

In work (5) [Homepage], we develop a multiscale algorithm for discovering dynamics from data. We accurately fit multiscale data to a first-order-in-time PDE. Highlights include physics-aware loss functions and training efficiency, in addition to accurate predictions.
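To give a flavor of what fitting data to a first-order-in-time PDE means, here is a deliberately naive sketch: build a library of candidate spatial terms and fit the time derivative by least squares. This illustrates only the general idea; it is not the multiscale, physics-aware algorithm of the paper, and the candidate library below is a hypothetical choice.

    import numpy as np

    def fit_first_order_pde(u, dx, dt):
        # u has shape (nt, nx): snapshots of a 1D field on a uniform grid.
        # Candidate right-hand-side terms: u, u_x, u_xx, u*u_x.
        u_t = np.gradient(u, dt, axis=0)
        u_x = np.gradient(u, dx, axis=1)
        u_xx = np.gradient(u_x, dx, axis=1)
        library = np.stack([u, u_x, u_xx, u * u_x], axis=-1).reshape(-1, 4)
        coeffs, *_ = np.linalg.lstsq(library, u_t.reshape(-1), rcond=None)
        return coeffs  # coefficients of u, u_x, u_xx, u*u_x in the fitted u_t

    # Example: synthetic data from the heat equation u_t = 0.1 * u_xx.
    nx, nt, dx, dt = 128, 200, 0.05, 0.001
    x = dx * np.arange(nx)
    u = np.empty((nt, nx))
    u[0] = np.sin(2 * np.pi * x / (nx * dx))
    for n in range(nt - 1):
        lap = (np.roll(u[n], -1) - 2 * u[n] + np.roll(u[n], 1)) / dx**2
        u[n + 1] = u[n] + dt * 0.1 * lap
    print(fit_first_order_pde(u, dx, dt))  # expect a coefficient near 0.1 on u_xx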

Future work 1:

For a large class of PDEs, Green's functions provide a way to describe analytical solutions and thus an alternate approach to algorithm development, for example Split-Step methods. Discovering the dynamics satisfied by data using a Green's function approach can be efficient in many ways. I provide an example code on my Github page. Contact me if you are interested in the details of this project (use rdelgadillo0000@gmail.com).
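To make the idea concrete: for a linear constant-coefficient problem $\partial_t u = \mathcal{L} u$, the Green's function G gives the solution by convolution, and splitting methods use this one piece at a time (a generic sketch, not tied to any particular equation):

$$ u(x,t) = \int G(x - y,\, t)\, u(y, 0)\, \mathrm{d}y, \qquad u(t + \Delta t) \approx e^{\Delta t\,\mathcal{A}}\, e^{\Delta t\,\mathcal{B}}\, u(t) \quad \text{for } \mathcal{L} = \mathcal{A} + \mathcal{B}, $$

where each factor is applied exactly (or cheaply) using the corresponding Green's function or Fourier multiplier. Learning G directly from data is one way to discover the dynamics the data satisfies.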

Future work 2:

In work (4) [Homepage], we derive a Green's function using perturbation theory. The work in (4) presents a very elegant mathematical derivation, using asymptotic analysis, to improve TDDFT algorithms. Beyond this improvement, however, it is too difficult to use higher-order perturbation theory to derive a new algorithm, so I am investigating improving this algorithm using neural networks. One trouble with TDDFT is that one needs a very fine time-step to simulate atoms and molecules over a long time interval. The goal is to derive an ML algorithm that can increase the time-step for TDDFT algorithms.
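Schematically, the goal can be phrased as learning a coarse propagator: a network $\mathcal{N}_\theta$ (hypothetical notation) trained so that

$$ \psi(t + \Delta T) \approx \mathcal{N}_\theta\big(\psi(t)\big), \qquad \Delta T \gg \delta t, $$

where $\delta t$ is the fine time-step that conventional TDKS solvers require for accuracy and stability, and $\Delta T$ is the much larger step the learned propagator takes.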
