
Research Interests

Functional analysis, operator algebras, multivariable operator theory, cohomology

Current Interests

My research is in functional analysis and operator algebras. I study the representation theory of certain non-selfadjoint operator algebras, called (operator) tensor algebras, through the use of homological methods. The objects of study, operator algebras, are algebras of continuous linear transformations on Hilbert space. Motivated by problems in group representation theory, quantum mechanics, logic, and geometry, the subject originated in the pioneering work of Murray and von Neumann in the 1930s. Since that time, the subject has grown into one of the major themes in mathematics, with applications in a host of other subjects, including knot theory and topology, foliation theory and noncommutative geometry, and engineering control theory, to name a few.

My work focuses on the problem of "putting together two representations (or modules) to build a third". In short, I study isomorphism classes of extensions of one Hilbert module by another, using homological methods that can be traced back to the pioneering work of B. Johnson, who was the first to consider systematically a homological algebra theory in functional analysis.

Here is a video featuring the operator algebraist Dr. Takeshi Katsura. It is the best explanation of operator algebras for a general audience that I have seen on YouTube. I hope you enjoy it.

Projects

Completely Contractive Extensions of Hilbert Modules over Tensor Algebras [pdf]

This paper studies completely contractive extensions of Hilbert modules over tensor algebras of C*-correspondences. Using a result of Sz.-Nagy and Foias on triangular contractions, extensions are parameterized in terms of contractive intertwining maps between certain defect spaces. These maps have a simple description when the initial data consist of partial isometries. Sufficient conditions for the vanishing and nonvanishing of completely contractive Hilbert module Ext are given that parallel results for the classical disc algebra.


Matrix Differential Equations

Joint work with undergraduate students.
A derivation is a linear map on an algebra satisfying the product rule. Derivations can be used to define differential operators and, therefore, differential equations. Starting with a noncommutative algebra, we observe that its theory of differential equations can behave remarkably differently from what one might expect from classical differential equations. To date, we have investigated the following:

  • Dimension of the solution space for homogeneous DEs over full matrix algebras (MDEs). The order of the MDE no longer agrees with the solution dimension.
  • Variation of parameters for MDEs. Much of the method is algebraic and carries over unchanged. There is even a Wronskian matrix, but because of the previous item it is a rectangular matrix rather than a square one. Intuitively, the roles of linear independence and the spanning property for a fundamental set of solutions become more pronounced in arguments using non-square Wronskians than what is seen in classical DEs.
  • MDEs over subalgebras of lower triangular matrices. Because multiplication is simpler there, the theory of MDEs appears to be more tractable over lower triangular matrices.
  • Differential Equations over path algebras. The previous example is a special case of a path algebra.
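To make the central definition concrete, here is a minimal pure-Python sketch (the matrix A and the test matrices are arbitrary illustrative choices, not taken from our papers): an inner derivation δ(X) = AX − XA automatically satisfies the product rule, as a direct consequence of associativity.

```python
def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def inner_derivation(A):
    """Return the inner derivation delta(X) = AX - XA."""
    return lambda X: mat_sub(mat_mul(A, X), mat_mul(X, A))

A = [[0, 1], [0, 0]]        # arbitrary choice of A
delta = inner_derivation(A)

X = [[1, 2], [3, 4]]
Y = [[5, 6], [7, 8]]

# Product rule check: delta(XY) == delta(X) Y + X delta(Y)
lhs = delta(mat_mul(X, Y))
rhs = mat_add(mat_mul(delta(X), Y), mat_mul(X, delta(Y)))
assert lhs == rhs
```

Any A works here; the check goes through for every pair X, Y because (AX − XA)Y + X(AY − YA) telescopes to A(XY) − (XY)A.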

Solution Dimensions of Matrix Differential Equations with K. Garcia and D. Persaud [pdf]

This paper studies the possible dimensions of solution spaces for first-order matrix differential equations over M_2(C). MDEs are purely algebraic, noncommutative analogues of classical ordinary differential equations in which functions are replaced by matrices and differentiation is replaced by a derivation. An elementary proof is provided that all derivations on M_2(C) are inner. A coefficient matrix is derived that encodes key features of the MDE. In particular, Gaussian elimination is used to determine which solution dimensions are possible and which are impossible. However, the coefficient matrix has variable entries, so a game-like, case-by-case analysis is carried out. An eigenvalue approach is also offered as an alternative proof.
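To see why the order of an MDE need not match the solution dimension, here is a minimal worked example of my own (not necessarily one treated in the paper), using an inner derivation and the conjugation trick:

```latex
% First-order MDE over M_2(C) with inner derivation delta(X) = AX - XA:
%     X'(t) = A X(t) - X(t) A.
% Conjugation by e^{tA} solves it for *every* initial matrix X_0:
\[
X(t) = e^{tA}\, X_0\, e^{-tA},
\qquad
X'(t) = A\, e^{tA} X_0 e^{-tA} - e^{tA} X_0 e^{-tA} A
      = \delta\bigl(X(t)\bigr),
\]
% using that A commutes with e^{\pm tA}. Since X_0 ranges over all of
% M_2(C), the solution space of this first-order equation is
% 4-dimensional over C, not 1-dimensional as in the classical case.
```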


Cohomology of Hilbert Modules over Graph Tensor Algebras

The goal of this project is to determine the first cohomology group for Hilbert modules over tensor algebras constructed from directed graphs, and to interpret this group in graph-theoretic terms.

This cohomology group can be defined in terms of derivations (linear transformations satisfying the product rule). To make the subject more accessible to undergraduate students, I created the following visualizations of the product rule on graphs. Interestingly, the first animation corresponds to derivations on the algebra of lower triangular 3x3 matrices and thus ties in with the MDE project described above.

As differential operators, derivations can be used to define differential equations on directed graphs via their associated graph algebras. Surprisingly, many of the methods taught in undergraduate differential equations courses still make sense in noncommutative algebra settings. Here are some slides [pdf] from a talk I gave on variation of parameters in the MDE setting. Although the slides focus on differential equations over M_n(R), the results apply to more general differential algebras, including directed graph algebras. As a functional analyst, I am particularly interested in operator norm completions of the graph algebras studied in pure algebra. In this setting, convergent Taylor series of graph cycles allow us to define exponentials, which have lovely properties with respect to differentiation.
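In the purely algebraic setting the exponentials are especially simple: the edges of an acyclic graph generate nilpotent elements, so the Taylor series of the exponential terminates after finitely many terms. A small illustrative sketch (the matrix N below encodes the edges of the A3 chain graph; this is a hypothetical example of mine, not code from the slides):

```python
from fractions import Fraction

def mat_mul(X, Y, n=3):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp_nilpotent(N):
    """exp(N) = I + N + N^2/2 exactly, for a 3x3 matrix with N^3 = 0."""
    I = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]
    N2 = mat_mul(N, N)
    return [[I[i][j] + N[i][j] + N2[i][j] / 2 for j in range(3)]
            for i in range(3)]

# Sum of the edge "matrix units" of the A3 chain graph: one step down
# the chain, so N is strictly lower triangular and N^3 = 0.
N = [[Fraction(0), Fraction(0), Fraction(0)],
     [Fraction(1), Fraction(0), Fraction(0)],
     [Fraction(0), Fraction(1), Fraction(0)]]

assert mat_mul(mat_mul(N, N), N) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
E = mat_exp_nilpotent(N)
```

Because the series terminates, no analysis is needed here; the operator norm completions become essential only once the graph has cycles, where the series is genuinely infinite.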

Visualizations of Derivations on Graphs

A3 example:
The directed graph in this example is simply a directed chain with three vertices. The animation below is a graph-theoretic visualization and algebraic generalization of the familiar product rule from calculus. Recall the product rule: d/dx(αβ) = (d/dx α)β + α(d/dx β). To put this in the graph algebra context, suppose αβ corresponds to a path that is the concatenation of two edges α and β, where α starts where β ends. Note: I am using the right-to-left convention for composition of paths.

Graphically, a derivation Δ involves two copies of the original graph. In the Desmos applet below, you see two copies of A3, a blue copy and a red copy, each corresponding to a separate (Hilbert module) representation. Each copy consists of two (horizontal) edges. The two left edges correspond to two copies of α, a blue (top) copy and a red (bottom) copy; the two right edges correspond to two copies of β, again a blue (top) copy and a red (bottom) copy. Thus, there are also two copies of the path αβ. The orange edges correspond to the derivation Δ applied to α (the left orange edge) and to β (the right orange edge). In this way, we can visualize Δ(αβ) as a sum of paths that start in the blue copy and terminate in the red copy.

Δ(αβ) = Δ(α)ρ(β) + π(α)Δ(β) = D_α T_β + S_α D_β

The blue edges correspond to the T maps in the formula above and the red edges correspond to the S maps. The D maps that determine the derivation Δ correspond to the orange edges linking the two representations. When you press play, you will see an animation of two paths: the top path is a visualization of D_α T_β and the bottom path animates S_α D_β.
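A standard way to package this picture algebraically: the pair of representations (π, ρ) and the linking maps D assemble into a block upper-triangular map, and multiplicativity of that map is equivalent to the product rule for Δ. In LaTeX:

```latex
% Assemble pi, rho, and Delta into a 2x2 block upper-triangular map:
\[
\sigma(a) \;=\;
\begin{pmatrix} \pi(a) & \Delta(a) \\ 0 & \rho(a) \end{pmatrix}.
\]
% Multiplying out sigma(a)sigma(b) and comparing the (1,2) entries shows
% that sigma is multiplicative exactly when Delta satisfies
\[
\Delta(ab) \;=\; \pi(a)\Delta(b) + \Delta(a)\rho(b),
\]
% which is the product rule visualized by the animation.
```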

Intuitively, I like to imagine the two copies of the graph being separate universes. The derivative of αβ is a sum of the ways, starting in the blue universe, we can traverse the path αβ. At each edge of the path, we have the option to transfer to the alternate red universe and carry out the remainder of the path in this alternate universe.

Just press play next to the t variable. Technically, the animation is completed once the two paths meet up at the bottom right vertex, but the default behavior in Desmos is to replay the animation in reverse. The rewind is not part of the mathematical visualization.

[Interactive Desmos applet: press play on the t slider to animate the two paths D_α T_β (top) and S_α D_β (bottom) across the two copies of A3.]




k[x] example:

We now consider the directed graph with one vertex and one loop γ. The path γ^N corresponds to traversing the loop N times.

Δ(γ^3) = Δ(γ)ρ(γ^2) + π(γ)Δ(γ)ρ(γ) + π(γ^2)Δ(γ) = DT^2 + SDT + S^2D

The animation below is a visualization of the familiar power rule. Specifically, it shows the derivative of γ^3. Again, we have two copies of the graph corresponding to two different representations: ρ (the left copy) and π (the right copy). The derivation Δ is determined by the map D between the copies. In the special case that S = T and D = I, we get the expression 3T^2, which is reminiscent of d/dx x^3 = 3x^2. In general, any path from the left vertex to the right vertex will start with some number, say N, of loops around the left loop, then move to the right vertex, and finish with some number, say M, of loops around the right loop. Algebraically, such a path is denoted S^M D T^N. Note: N or M could be zero, as we will see in the next animation. Below, the motion of the red particle corresponds to the term S^2D, the blue particle moves according to SDT, and the orange particle follows DT^2. These three paths add together to yield Δ(γ^3).
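The pattern for a general power of the loop, obtained by iterating the product rule, can be written out as follows:

```latex
% General power rule for the loop gamma, by induction on the product rule:
\[
\Delta(\gamma^{N}) \;=\; \sum_{k=0}^{N-1} \pi(\gamma)^{k}\,\Delta(\gamma)\,\rho(\gamma)^{\,N-1-k}
\;=\; \sum_{k=0}^{N-1} S^{k}\, D\, T^{\,N-1-k}.
\]
% With S = T and D = I, all N terms coincide and the sum collapses to
% N T^{N-1}, recovering the classical power rule.
```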

As before, just press play next to the t variable. Technically, the animation is complete once all the colored dots reach the right vertex (i.e., when the orange dot crosses to the right). The default behavior in Desmos is to replay the animation in reverse; the rewind is not part of the mathematical visualization.

[Interactive Desmos applet: press play on the t slider to animate the red (S^2D), blue (SDT), and orange (DT^2) dots traversing their paths between the two loops.]