Functional analysis, operator algebras, multivariable operator theory, cohomology
My research is in functional analysis and operator algebras. I study the representation theory of certain non-selfadjoint operator algebras, called (operator) tensor algebras, through the use of homological methods. The objects of study, operator algebras, are algebras of continuous, linear transformations on Hilbert space. Motivated by problems in group representation theory, quantum mechanics, logic and geometry, the subject originated in the pioneering works of Murray and von Neumann in the 1930s. Since that time, the subject has grown into one of the major themes in mathematics, with applications in a host of new subjects, including knot theory and topology, foliation theory and noncommutative geometry, and engineering control theory, to name a few.
My work focuses on the problem of "putting together two representations (or modules) to build a third". In short, I study isomorphism classes of extensions of one Hilbert module by another, using homological methods that can be traced back to the pioneering work of B. Johnson, who was the first to systematically develop homological algebra within functional analysis.
Here is a video featuring the operator algebraist Dr. Takeshi Katsura. It is the best explanation of operator algebras for a general audience that I have seen on YouTube. I hope you enjoy it.
This paper studies completely contractive extensions of Hilbert modules over tensor algebras of C∗-correspondences. Using a result of Sz.-Nagy and Foias on triangular contractions, extensions are parameterized in terms of contractive intertwining maps between certain defect spaces. These maps have a simple description when the initial data consist of partial isometries. Sufficient conditions for the vanishing and nonvanishing of completely contractive Hilbert module Ext are given that parallel results for the classical disc algebra.
Joint work with undergraduate students.
A derivation is a linear map on an algebra satisfying the product rule. Derivations can be used to define differential operators and, therefore, differential equations. Starting with a noncommutative algebra, we observe that its theory of differential equations can behave remarkably differently from what one might expect from classical differential equations. To date, we have investigated the following:
This paper studies the possible dimensions of solution spaces for first-order matrix differential equations (MDEs) over M2(C). MDEs are purely algebraic, noncommutative analogues of classical ordinary differential equations in which functions are replaced by matrices and differentiation is replaced by a derivation. An elementary proof is provided that shows all derivations on M2(C) are inner. A coefficient matrix is derived that encodes key features of the MDE. In particular, Gaussian elimination is used to determine which solution dimensions are possible and impossible. However, the coefficient matrix has variable entries, so a game-like, case-by-case analysis is carried out. An eigenvalue approach is also offered as an alternative proof.
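To make the basic objects concrete, here is a minimal numerical sketch (illustrative only, not code from the paper): fixing a matrix A in M2(C) and setting δ(X) = AX − XA gives a linear map satisfying the product rule, and by the result above every derivation on M2(C) arises this way. The specific matrices below are arbitrary choices for the check.

import numpy as np

# Inner derivation on M2(C): delta(X) = A X - X A for a fixed matrix A.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]], dtype=complex)   # arbitrary choice of A

def delta(X):
    return A @ X - X @ A                    # linear in X by construction

X = np.array([[0.0, 1.0],
              [1.0, 0.0]], dtype=complex)
Y = np.array([[2.0, 0.0],
              [1.0, 1.0j]], dtype=complex)

# Product (Leibniz) rule: delta(XY) = delta(X) Y + X delta(Y).
print(np.allclose(delta(X @ Y), delta(X) @ Y + X @ delta(Y)))   # True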
Determine the first cohomology group for Hilbert modules over tensor algebras constructed from directed graphs. Interpret this group in graph-theoretic terms.
This cohomology group can be defined in terms of derivations (linear transformations satisfying the product rule). To make the subject more accessible to undergraduate students, I created the following visualizations of the product rule on graphs. Interestingly, the first animation involves derivations on the algebra of lower triangular 3x3 matrices and is therefore related to the MDE project described above.
As differential operators, derivations can be used to define differential equations on directed graphs via their associated graph algebras. Surprisingly, many of the methods taught to undergraduate students in differential equations courses still make sense in noncommutative algebraic settings. Here are some slides [pdf] from a talk I gave on variation of parameters in the MDE setting. Although the slides focus on differential equations on Mn(R), the results apply to more general differential algebras, including directed graph algebras. As a functional analyst, I am particularly interested in operator norm completions of the graph algebras studied in pure algebra. In this setting, convergent Taylor series of graph cycles allow us to define exponentials, which have lovely properties with respect to differentiation.
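As a small, hedged illustration of that last point (my own sketch, not taken from the slides): define exp by its Taylor series and take an inner derivation δ(X) = AX − XA. Whenever δ(x) commutes with x, one gets δ(exp(x)) = δ(x) exp(x), an algebraic analogue of the familiar (e^u)' = u' e^u. The matrices A and x below are arbitrary illustrative choices with that commuting property.

import numpy as np

def taylor_exp(X, terms=30):
    # exp(X) approximated by the truncated Taylor series sum_k X^k / k!
    result = np.zeros_like(X)
    term = np.eye(X.shape[0], dtype=X.dtype)
    for k in range(terms):
        result = result + term
        term = term @ X / (k + 1)
    return result

A = np.diag([3.0, 1.0])                 # inner derivation delta = [A, . ]
def delta(X):
    return A @ X - X @ A

x = np.array([[0.0, 1.0],
              [0.0, 0.0]])              # here delta(x) = 2x, which commutes with x

E = taylor_exp(x)
print(np.allclose(delta(E), delta(x) @ E))   # True: delta(exp(x)) = delta(x) exp(x)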
Intuitively, I like to imagine the two copies of the graph as separate universes. The derivative of αβ is a sum over the ways we can traverse the path αβ, starting in the blue universe. At each edge of the path, we have the option to transfer to the red universe and carry out the remainder of the path there.
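In symbols, the two options at each edge are exactly the two terms of the Leibniz rule for a derivation Δ determined by two representations ρ and π of the graph algebra (the notation is spelled out with the animations further below; the left/right placement depends on the convention used there):

\[ \Delta(\alpha\beta) \;=\; \rho(\alpha)\,\Delta(\beta) \;+\; \Delta(\alpha)\,\pi(\beta). \]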
Just press play next to the t variable. Technically, the animation is completed once the two paths meet up at the bottom right vertex, but the default behavior in Desmos is to replay the animation in reverse. The rewind is not part of the mathematical visualization.
We now consider the directed graph with one vertex and one loop γ. The path γ^N corresponds to traversing the loop N times.
The animation below is a visualization of the familiar power rule. Specifically, it shows the derivative of γ^3. Again, we have two copies of the graph corresponding to two different representations: ρ (the left copy) and π (the right copy). The derivation Δ is determined by the map D mapping between the copies. In the special case that S = T and D = I, we get the expression 3T^2, which is reminiscent of d/dx(x^3) = 3x^2. In general, any path from the left vertex to the right vertex will start with some number, say N, of loops around the left loop, then move to the right vertex, and finish with some number, say M, of loops about the right loop. Algebraically, such a path is denoted S^M D T^N (reading the product from right to left). Note: N or M could be zero, as we will see in the next animation. Below, the motion of the red particle corresponds to the term S^2 D, the blue particle moves according to S D T, and the orange particle follows D T^2. These three paths add together to yield Δ(γ^3).
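The same bookkeeping gives the general noncommutative power rule suggested by the animation (a short induction on the Leibniz rule, in the notation above):

\[ \Delta(\gamma^{n}) \;=\; \sum_{k=0}^{n-1} S^{\,n-1-k}\, D\, T^{\,k}, \]

which reduces to nT^(n-1) in the special case S = T and D = I, mirroring d/dx(x^n) = n x^(n-1).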
As before, just press play next to the t variable. Technically, the animation is completed once all the colored dots reach the right vertex (i.e., when the orange dot finally crosses to the right). The default behavior in Desmos is to replay the animation in reverse; the rewind is not part of the mathematical visualization.