Analysis of the Hodge Laplacian on the Heisenberg group

We consider the Hodge Laplacian $\Delta$ on the Heisenberg group $H_n$, endowed with a left-invariant and $U(n)$-invariant Riemannian metric. For $0\le k\le 2n+1$, let $\Delta_k$ denote the Hodge Laplacian restricted to $k$-forms. Our first main result shows that $L^2\Lambda^k(H_n)$ decomposes into finitely many mutually orthogonal subspaces $V_\nu$ with the following properties:
- $\mathrm{dom}\,\Delta_k$ splits along the $V_\nu$'s as $\sum_\nu(\mathrm{dom}\,\Delta_k\cap V_\nu)$;
- $\Delta_k:(\mathrm{dom}\,\Delta_k\cap V_\nu)\longrightarrow V_\nu$ for every $\nu$;
- for each $\nu$, there is a Hilbert space $\mathcal{H}_\nu$ of $L^2$-sections of a $U(n)$-homogeneous vector bundle over $H_n$ such that the restriction of $\Delta_k$ to $V_\nu$ is unitarily equivalent to an explicit scalar operator.
Next, we consider $L^p\Lambda^k$, $1<p<\infty$, and prove that the same kind of decomposition holds true. More precisely, we show that:
- the Riesz transforms $d\Delta_k^{-1/2}$ are $L^p$-bounded;
- the orthogonal projection onto $V_\nu$ extends from $(L^2\cap L^p)\Lambda^k$ to a bounded operator from $L^p\Lambda^k$ to the $L^p$-closure $V^p_\nu$ of $V_\nu\cap L^p\Lambda^k$.

Introduction
1. Differential forms and the Hodge Laplacian on H_n
2. Bargmann representations and sections of homogeneous bundles
3. Cores, domains and self-adjoint extensions
4. First properties of ∆_k; exact and closed forms
5. A decomposition of L^2Λ^k_H related to the ∂ and ∂̄ complexes
5.1. The subspaces
5.2. The action of ∆_k
5.3. Lifting by Φ
6. Intertwining operators and different scalar forms for ∆_k
6.1. The case of V^{p,q}

Introduction
The theory of the Hodge Laplacian ∆ on a complete Riemannian manifold M shows deep connections between geometry, topology and analysis on M. While this theory is well developed in the case of functions, i.e., for the Laplace-Beltrami operator, much less is known for forms of higher degree on a non-compact manifold. In particular, one basic question that one would like to answer is whether the Riesz transform d∆^{−1/2} is L^p-bounded in the range 1 < p < ∞. According to [St], this property is relevant for establishing the Hodge decomposition in L^p for differential forms, cf. [ACDH, Li, Loh] and references therein.
In a similar way, functional calculus for self-adjoint, left-invariant Laplacians and sublaplacians L on Lie groups and more general manifolds has been widely studied, cf. [A, An, AnLoh, Ch, ChM, ClS, CoKSi, He, HeZ, Hu, Mar, LuM, LuMS, MauMe, MS, Si, Ta]. A key question concerns the possibility that, for a given L, a Mihlin-Hörmander condition of finite order on the multiplier m(λ) implies that the operator m(L) is bounded on L^p for 1 < p < ∞. A second fundamental question is the L^p-boundedness, in the same range of p, of the Riesz transforms XL^{−1/2} for appropriate left-invariant vector fields X [CD, CMZ, GSj, Loh2, LohMu]. Also in these situations, not much is known for operators which act on sections of some homogeneous linear bundle over a given group. The most notable case is that of the sublaplacians associated to the ∂̄_b-complex on homogeneous CR-manifolds [CoKSi, FS].
In this paper we consider the Hodge Laplacian ∆ on the Heisenberg group H_n, endowed with a left-invariant and U(n)-invariant Riemannian metric, and give answers to the above questions.
The rich structure of the Heisenberg group makes it a natural model to explore such questions in detail. First of all, it has a natural CR-structure, with a well-understood Kohn Laplacian [FS], and nice interactions with the Riemannian structure [MPR1].
For operators on H_n which act on scalar-valued functions and are left- and U(n)-invariant, the methods of Fourier analysis are quite handy for studying spectral resolutions, and sharp multiplier theorems for differential operators of this kind are known [MRS1, MRS2]. This class of operators is based on two commuting differential operators, namely the sublaplacian L and the central derivative T, in the following sense:
- the left- and U(n)-invariant differential operators on H_n are the polynomials in L and T;
- the left- and U(n)-invariant self-adjoint operators on L^2(H_n) containing the Schwartz space in their domain are the operators m(L, i^{−1}T), with m a real spectral multiplier.
The same methods also allow one to study operators acting on differential forms, like the Kohn Laplacian, which have the property of acting componentwise with respect to a canonical basis of left-invariant forms, cf. (1.4).
On the other hand, the Hodge Laplacian restricted to k-forms, which we denote by ∆ k and whose explicit expression is given in (1.22) below, is far from acting componentwise.
Nevertheless, we are able to reduce the spectral analysis of ∆_k to that of a finite family of explicit scalar operators. We call an operator on some space of differential forms scalar if it can be expressed as D ⊗ I, i.e., if it acts separately on each scalar component of a given form by the same operator D.
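In coordinates, the componentwise action can be written out explicitly; the coframe notation $\{\xi_I\}$ below is generic, chosen for illustration, and is not fixed by the paper at this point:

```latex
% A scalar operator acts through the same scalar operator D on every
% component: if \omega = \sum_I \omega_I \, \xi_I with respect to a
% fixed basis \{\xi_I\} of left-invariant forms, then
(D \otimes I)\,\omega \;=\; \sum_I \,(D\,\omega_I)\,\xi_I .
```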
We do so by introducing a decomposition of L^2Λ^k(H_n) into finitely many mutually orthogonal subspaces V_ν with the following properties:
(i) dom ∆_k splits along the V_ν's as ∑_ν (dom ∆_k ∩ V_ν);
(ii) ∆_k : (dom ∆_k ∩ V_ν) −→ V_ν for every ν;
(iii) for each ν, there is a Hilbert space H_ν of L^2-sections of a U(n)-homogeneous vector bundle over H_n such that the restriction of ∆_k to V_ν is unitarily equivalent to a scalar operator m_ν(L, i^{−1}T) acting componentwise on H_ν;
(iv) there exist unitary operators U_ν : H_ν −→ V_ν intertwining m_ν(L, i^{−1}T) and ∆_k which are either bounded multiplier operators u_ν(L, i^{−1}T), or compositions of such operators with the Riesz transforms.
This is done in the first part of the paper (Sections 3-8), which we refer to as the "L^2-theory". The main results in this context are Theorems 8.1 and 8.6, where we obtain the decomposition of L^2Λ^k into the ∆_k-invariant subspaces V_ν, for 0 ≤ k ≤ n and n + 1 ≤ k ≤ 2n + 1 respectively.
This decomposition is fundamental for the whole second part of the paper, which we describe next. A quick description of the logic and of the basic ideas behind the construction of the V_ν is postponed to the last part of this introduction.
In Sections 9-12, we develop the "L^p-theory"¹. We prove that, for 1 < p < ∞, the same kind of decomposition also takes place in L^pΛ^k. Precisely:
(a) the intertwining operators U_ν in (iv) have L^p-bounded extensions;
(b) consequently, the orthogonal projections U*_νU_ν from L^2Λ^k to V_ν extend to bounded operators from L^pΛ^k to the L^p-closure V^p_ν of V_ν ∩ L^pΛ^k;
(c) the Riesz transforms R_k = d∆_k^{−1/2} are L^p-bounded;
(d) the L^p-strong Hodge decomposition holds true for k = 0, …, 2n + 1; more precisely, L^pΛ^k is the direct sum of the subspaces V^p_ν.
The (much simpler) case of 1-forms was already considered in [MPR1]. We expect that our results can be applied to the study of conformal invariants on quotients of H_n, along the lines of [MPR2], cf. [Lot, Lü].
In our last main result, as a consequence of the L^p-theory we develop, we prove a Mihlin-Hörmander multiplier theorem for ∆_k, for all k = 0, …, 2n + 1. We show that, if m : R → C is a bounded, continuous function satisfying a Mihlin-Hörmander condition of order ρ > (2n + 1)/2, then, for 1 < p < ∞, the operator m(∆_k) is bounded on L^p(H_n)Λ^k, with norm bounded by the appropriate norm of m (cf. Theorem 11.1).
We briefly comment on some interesting aspects of the proof and on some consequences and applications. It is always assumed that 1 < p < ∞.
• Our inductive strategy requires that two statements be proved simultaneously at each step: property (a) above for the given k and the L^p-boundedness of the Riesz transform R_k = d∆_k^{−1/2}. Precisely, the validity of (a) for a given k implies the L^p-boundedness of R_k, and this, in turn, is required to prove (a) for k + 1.
• In order to handle the complicated expressions of the intertwining operators U_ν, we identify certain symbol classes, denoted by Ψ^{ρ,σ}_τ, which satisfy simple composition properties, contain all the scalar components of the U_ν, and, when bounded, give L^p-bounded operators (cf. Subsection 9.2).
• Taking as the initial definition of "exact L^p-form" a form ω which is the L^p-limit of a sequence of exact test forms (cf. Proposition 4.5 for p = 2), we prove in Subsection 11.2 that this condition is equivalent to saying that ω is in L^p and is a differential in the sense of distributions. Incidentally, this allows us to prove that the reduced L^p-cohomology of H_n is trivial for every k.
• The Mihlin-Hörmander theorem for spectral multipliers of ∆_k, proved in [MPR1] for k = 1, extends to every k.
• Our analysis of ∆ easily yields analogous results for the Dirac operator d + d*. Studying the Hodge Laplacian first has the advantage of isolating one order of forms at a time. Corollary 11.5 is a multiplier theorem for the Dirac operator completely analogous to Theorem 11.1.
Outline of the decomposition of L^2Λ^k.
1 There is an unfortunate notational conflict, due to the fact that the letter p is the commonly used symbol for both Lebesgue spaces and bi-degrees of forms. In this introduction and in the titles of sections we keep the notation L p , while in the body of the paper we will denote by L r the generic Lebesgue space.
We go back now to the construction of the subspaces V ν of L 2 Λ k (H n ).
First of all, by Hodge duality, we may restrict ourselves to forms of degree k ≤ n. We start from the primary decomposition into exact and d*-closed forms,
L^2Λ^k = (L^2Λ^k)_{d-ex} ⊕ (L^2Λ^k)_{d*-cl},
where each summand is ∆_k-invariant.
Since the Riesz transform R_{k−1} = d∆_{k−1}^{−1/2} commutes with ∆ and maps (L^2Λ^{k−1})_{d*-cl} onto (L^2Λ^k)_{d-ex} unitarily, any ∆-invariant subspace V_ν of (L^2Λ^{k−1})_{d*-cl} has a twin ∆-invariant subspace inside (L^2Λ^k)_{d-ex}. The analysis is thus reduced to the space of d*-closed forms. Associated with the CR-structure of H_n, there is a natural notion of horizontal (p, q)-form as a section of the bundle Λ^{p,q} = Λ^{p,q}T*_C H_n, and of horizontal k-form as a section of Λ^k_H = ⊕_{p+q=k} Λ^{p,q}. Every differential form ω decomposes uniquely as ω = ω_1 + θ ∧ ω_2, where ω_1, ω_2 are horizontal and θ is the contact form. Moreover, a d*-closed form ω is uniquely determined by its horizontal component ω_1.
From now on it is very convenient to introduce a special "test space" S 0 , contained in the Schwartz space, together with its corresponding spaces of forms, S 0 Λ k , S 0 Λ p,q etc., which are cores for ∆ k and the other self-adjoint operators that will appear.
For forms in the core, we have enough flexibility to perform all the required operations in a rather formal way, leaving the extensions to L 2 -closures for the very end. For instance, we can say that to every horizontal form ω 1 in the core we can associate a "vertical component" θ ∧ ω 2 , also in the core, to form a d * -closed form ω 1 + θ ∧ ω 2 in the core.
Setting Φ(ω_1) = ω_1 + θ ∧ ω_2, we can replace ∆_k by the conjugated (but no longer differential) operator D_k = Φ^{−1} ∘ ∆_k ∘ Φ, which now acts on the space of horizontal k-forms in the core and is globally defined.
Here comes into play another invariance property of ∆_k, which is easily read as a property of D_k and involves the horizontal symplectic form dθ, through an identity that holds (cf. Lemma 5.11) for a horizontal form ω of degree k. This brings in the Lefschetz decomposition of the space of horizontal forms, as adapted in [MPR1] from the classical context of Kähler manifolds [W]. Denoting by e(dθ) the operator of exterior multiplication by dθ and by i(dθ) its adjoint, it is then natural to decompose the core S_0Λ^{p,q} in the space of horizontal (p, q)-forms as the direct sum
S_0Λ^{p,q} = ⊕_{j=0}^{min{p,q}} e(dθ)^j ker i(dθ).
Here each summand is D k -invariant, and the conjugation formula (0.1) allows us to focus our attention on ker i(dθ).
Nevertheless, ker i(dθ) is still too big a space to allow a reduction of D_k to scalar operators. It is however easy to identify, for each pair (p, q) with p + q = k, a proper D_k-invariant subspace of ker i(dθ) ∩ S_0Λ^{p,q}, namely
W^{p,q}_0 = {ω ∈ S_0Λ^{p,q} : ∂*ω = ∂̄*ω = 0}.
It turns out that D_k acts as a scalar operator on W^{p,q}_0, so that the L^2-closure V^{p,q}_0 of Φ(W^{p,q}_0) will be one of the spaces V_ν we are looking for. Next, we pass to the orthogonal complement and telescopically expand this splitting. The subspaces
W^{p,q}_1 = {∂ξ + ∂̄η : ξ, η ∈ W^{p,q}_0} (p + q = k − 1),
W^{p,q}_2 = {∂̄∂ξ + ∂∂̄η : ξ, η ∈ W^{p,q}_0} (p + q = k − 2),
etc., generated in this way are D_k-invariant and mutually orthogonal.
Matters are simplified by a vanishing phenomenon that occurs for j ≥ 1, so that only W^{p,q}_0, W^{p,q}_1 and part of W^{p,q}_2 are contained in ker i(dθ). On each W^{p,q}_0 and W^{p,q}_2, D_k acts as a scalar operator, as required.
The situation is not so simple for W^{p,q}_1, where the best one can obtain is a representation of D_k as a 2 × 2 matrix of scalar operators, after parametrizing the elements of W^{p,q}_1 by pairs (ξ, η) of forms in W^{p,q}_0. A formal computation on the core produces "eigenvalues" λ_±(L, i^{−1}T) and a splitting of W^{p,q}_1 as the sum of the two "eigenspaces" W^{p,q,±}_1. The final decomposition is given in formula (10.2).

1. Differential forms and the Hodge Laplacian on H_n
The Heisenberg group H_n is C^n × R with a product making it a two-step nilpotent Lie group. On its Lie algebra, also identified with C^n × R, we introduce the standard Euclidean inner product, and we consider the left-invariant Riemannian metric on H_n induced by it. The complex vector fields of the standard basis form an orthonormal basis of the complexified tangent space at each point, and the only nontrivial commutators among the basis elements are multiples of the central derivative T. The dual basis of complex 1-forms is ζ_1, …, ζ_n, ζ̄_1, …, ζ̄_n, θ, and it determines the expression of the differential of a function f. This formula extends to forms, once we observe that dζ_j = dζ̄_j = 0 and that the differential of the contact form θ is the symplectic form dθ on C^n. Every form ω decomposes as ω = ω_1 + θ ∧ ω_2, with ω_1, ω_2 horizontal. A differential operator D acting on scalar-valued functions is extended to forms by letting D act separately on each scalar component (1.6) of each horizontal component (1.7). Such operators will be called scalar operators.
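Several displayed formulas of this section are not reproduced above. For orientation, the following block records one common normalization of these objects, with z_j = x_j + iy_j; the constants are illustrative and may differ from the normalization fixed in [MPR1]:

```latex
% One common normalization of the structure of H_n (illustrative):
(z,t)\cdot(w,s) = \bigl(z+w,\ t+s+\tfrac12\,\mathrm{Im}\langle z,w\rangle\bigr), \qquad
Z_j = \partial_{z_j} + \tfrac{i}{4}\,\bar z_j\,\partial_t, \qquad
T = \partial_t, \\
[Z_j,\bar Z_k] = -\tfrac{i}{2}\,\delta_{jk}\,T, \qquad
\theta = dt + \tfrac12\sum_{j=1}^n \bigl(x_j\,dy_j - y_j\,dx_j\bigr), \qquad
d\theta = \tfrac{i}{2}\sum_{j=1}^n dz_j\wedge d\bar z_j .
```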
Together with ∂ and ∂̄, they satisfy a number of identities; further formulas involve ∂, ∂̄, e(dθ) and their adjoints. For these formulas and the following ones in this section, we refer to [MPR1].³
³ Perhaps we should add a few more formulas, while moving them into a section later in the paper.
We define the holomorphic, antiholomorphic and horizontal Laplacians □, □̄ and ∆_H. Each of these Laplacians acts componentwise. Calling (p, q)-form a horizontal form of type (p, q) and introducing the sublaplacian L, the operators □, □̄, ∆_H coincide on (p, q)-forms with scalar operators in L and i^{−1}T, cf. (1.19). To be more explicit, we shall occasionally denote the "box" operators by □_p and □̄_q. Some commutation relations that we will use can be found in [MPR1]. The full differential d of a form ω = ω_1 + θ ∧ ω_2 and its adjoint d* are represented, in terms of the pair (ω_1, ω_2), by 2 × 2 matrices of operators. When ∆ acts on k-forms, it will be denoted by ∆_k. We denote by Λ^k the k-th exterior product of the dual h*_n of the Lie algebra of H_n (identified with the linear span of ζ_1, …, ζ_n, ζ̄_1, …, ζ̄_n, θ), by Λ^k_H the k-th exterior product of the horizontal distribution (i.e. the linear span of the ζ_j, ζ̄_j), and by Λ^{p,q} the space of elements of bidegree (p, q) in Λ^k_H. Symbols like L^pΛ^k, SΛ^{p,q} etc. denote the spaces of L^p-sections, S-sections etc. of the corresponding bundle over H_n. Clearly, L^pΛ^k ≅ L^p ⊗ Λ^k etc.

2. Bargmann representations and sections of homogeneous bundles
The L^2-Fourier analysis on the Heisenberg group involves the family of infinite dimensional irreducible unitary representations {π_λ}_{λ≠0} such that π_λ(0, t) = e^{iλt}I. These representations are most conveniently realized for our purposes in a modified version of the Bargmann form [F].
The family of Bargmann representations π_λ on F is defined, for λ ≠ 0, starting from the case λ = 1. The unitary group U(n) acts on H_n through the automorphisms (z, t) −→ (z, t)^g = (gz, t), g ∈ U(n), and on L^2(H_n) through the induced representation, denoted α below. We also consider the pair of contragredient representations U, Ū of U(n) on F given in (2.4). The representation U splits into irreducibles according to the decomposition (2.6) of F as the direct sum of the spaces P_j of homogeneous polynomials of degree j.
We denote by P_j the orthogonal projection of F onto P_j, and by F_∞ the space of functions F ∈ F whose projections P_jF decay rapidly in j. Then F_∞ is the space of C^∞-vectors for all representations π_λ. The differential of π_λ satisfies π_λ(T) = iλ. We adopt the definition (2.9) of π_λ(f). Notice that π_λ(f ∗ g) = π_λ(g)π_λ(f); this disadvantage is compensated by a simpler formalism when dealing with forms or more general vector-valued functions.
The Plancherel formula holds for f ∈ L^2. Let V be a finite dimensional Hilbert space. Defining π_λ(f) for V-valued functions f by (2.9), the same formula holds. Suppose now that V is the representation space of a unitary representation ρ of U(n), and consider the two representations U ⊗ ρ, Ū ⊗ ρ of U(n) on F ⊗ V. Denote by Σ_+ = Σ_{ρ,+} (resp. Σ_− = Σ_{ρ,−}) the set of irreducible representations σ of U(n) contained in U ⊗ ρ (resp. in Ū ⊗ ρ), and let (2.10) be the corresponding orthogonal decompositions into U(n)-types. When V = C, the decomposition (2.10) reduces to (2.6). To indicate the dependence on ρ, we shall sometimes also write E^{ρ,±}_σ.
Lemma 2.1. For every σ, the space E^±_σ is finite dimensional.
Proof. We only discuss the case of U ⊗ ρ. For every j, P_j ⊗ V is an invariant subspace. Therefore, for an irreducible representation σ of U(n), E^+_σ = ⊕_j (E^+_σ ∩ (P_j ⊗ V)). Let χ_j, χ_ρ, χ_σ be the characters of U|_{P_j}, ρ, σ respectively. The multiplicity of σ in P_j ⊗ V then equals the multiplicity of U|_{P_j} in ρ̄ ⊗ σ (with ρ̄ denoting the contragredient of ρ). Since this representation is finite dimensional, the multiplicity can be positive only for a finite number of j. It follows that E^+_σ has finite dimension. Q.E.D.
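The character computation used in this proof is the standard orthogonality argument; a reconstruction in the notation above (the display itself is not reproduced in the present text):

```latex
% Multiplicity of \sigma in P_j \otimes V via character orthogonality,
% with dg the normalized Haar measure on U(n):
\mathrm{mult}\bigl(\sigma,\ U|_{P_j}\otimes\rho\bigr)
  \;=\; \int_{U(n)} \chi_j(g)\,\chi_\rho(g)\,\overline{\chi_\sigma(g)}\;dg
  \;=\; \mathrm{mult}\bigl(U|_{P_j},\ \bar\rho\otimes\sigma\bigr),
```

where the two multiplicities agree because they are real numbers and the corresponding character integrals are complex conjugates of each other.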
The decomposition of F ⊗ V given above leads to the form (2.11) of the Plancherel formula for L^2V, with P^±_σ denoting the orthogonal projection of F ⊗ V onto E^±_σ. Let ρ′ be another unitary representation of U(n) on a finite dimensional Hilbert space V′. The convolution of integrable functions f with values in V and K with values in L(V, V′) produces a function taking values in V′; in the representations π_λ, λ ≠ 0, it satisfies π_λ(f ∗ K) = π_λ(K)π_λ(f). Let ρ̃ (resp. ρ̃′) be the representation α ⊗ ρ on L^2V (resp. α ⊗ ρ′ on L^2V′) of U(n), and suppose that convolution by K is an equivariant operator.
By a variant of Schwartz's Kernel Theorem, the convolution operators D with kernels K ∈ S′(H_n) ⊗ L(V, V′) are characterized as the continuous operators from S(H_n) ⊗ V to S′(H_n) ⊗ V′ that commute with left translations on H_n. Lemma 2.2 applies to operators of this kind, provided that the Fourier transform π_λ(K) is well defined for λ ≠ 0. This is surely the case if K has compact support, and in particular for a left-invariant differential operator Df = f ∗ (Dδ_0). We then have π_λ(Df) = π_λ(Dδ_0)π_λ(f) = π_λ(D)π_λ(f). We apply these remarks to the differentials and Laplacians introduced in Section 1. Let ρ_k denote the representation of U(n) on Λ^k induced from its action on H_n by automorphisms, and, as before, let ρ̃_k = α ⊗ ρ_k be the tensor product acting on L^2Λ^k. Then d, d*, ∆_k are equivariant operators. The same applies to ∂, ∂̄, d_H etc. on the appropriate L^2-subbundles.
Notice that □, □̄ and ∆_H have the special property of acting scalarly on (p, q)-forms, by (1.19). Since the sublaplacian L has the property that π_λ(L) acts as a scalar multiple of the identity (namely, as |λ|(2m + n)I) on P_m ⊂ F, the same is true for the images of □, □̄, ∆_H under π_λ.
3. Cores, domains and self-adjoint extensions

Given ε > 0, by Plancherel's formula it is possible to find δ, R > 0 and N ∈ N such that the L^2-function g defined by π_λ(g) = u_{δ,R}(λ) ∑_{j≤N} P_j π_λ(f) approximates f in L^2 within ε.
We claim that g is in S, hence in S 0 , and this will conclude the proof, by the density of S in L 2 .
By definition, g = f ∗ h, where h is the function with π_λ(h) = u_{δ,R}(λ) ∑_{j≤N} P_j. By explicit computation of the matrix entries of the representations of H_n [Th], the Fourier transform ĥ(z, λ) of h in the t-variable can be expressed in terms of Schwartz functions ψ_j on C^n. Hence h ∈ S(H_n) and so is g. Q.E.D.
We regard S 0 as the inductive limit of the spaces S δ,R,N , each with the topology induced from S.
Obviously, S_0V = S_0 ⊗ V is contained in SV and dense in L^2V. Assume, as in Section 2, that V is a finite dimensional Hilbert space on which U(n) acts unitarily by the representation ρ. Taking into account the action of U(n), one can then introduce a different chain of subspaces filling up S_0V. Given 0 < δ < R, N ∈ N and finite subsets X_± of Σ_±, define S_{δ,R,X_±}V as the space of functions f such that
(i′) f ∈ S(H_n) ⊗ V;
(ii′) π_λ(f) = 0 for |λ| ≤ δ and |λ| ≥ R;
(iii′) for δ < |λ| < R, P^{sgn λ}_σ π_λ(f) = 0 for σ ∉ X_{sgn λ}.
It follows from Lemma 2.1 that finite unions of the S_{δ,R,X_±}V exhaust finite unions of the S_{δ,R,N}V and vice versa.
In order to develop the L^2-analysis of differentials and Laplacians, we establish some general facts about densely defined operators from L^2V to L^2V′, with (V, ρ), (V′, ρ′) finite-dimensional representation spaces of U(n). Precisely, we consider operators whose initial domain is S_0V and which map S_0V into S_0V′, continuously with respect to the Schwartz topologies. Most of the operators to be considered in this paper will belong to this class.
(i) Let B : S_0V −→ S_0V′ be a left-invariant linear operator, U(n)-equivariant and continuous with respect to the S_0-topologies. Then there exists a family of linear operators B_{λ,σ} : E^{ρ,sgn λ}_σ −→ E^{ρ′,sgn λ}_σ, depending smoothly on λ ≠ 0, such that (3.1) holds.
(ii) Conversely, given any family of linear operators B_{λ,σ} : E^{ρ,sgn λ}_σ −→ E^{ρ′,sgn λ}_σ, depending smoothly on λ ≠ 0, there is a unique left-invariant operator B : S_0V −→ S_0V′, U(n)-equivariant and continuous with respect to the S_0-topologies, such that (3.1) holds for every f ∈ S_0V. We set π_λ(B) = B_λ.
(iii) The closure of B as an operator from L^2V to L^2V′ has domain dom(B) consisting of those f ∈ L^2V satisfying the condition in (3.2).
(iv) If V′ = V and B is symmetric (equivalently, B_{λ,σ} is symmetric for every λ, σ), then B is essentially self-adjoint.
Moreover, the space S_0V ∩ dom m(B) is a core for m(B), and the corresponding identity holds for f ∈ dom m(B) and g ∈ L^2V.
Proof. To prove (i), let {Φ ℓ } ℓ∈N be an enumeration of the orthonormal basis of monomials in F.

Define linear operators
Let also {e i } and {e ′ j } be (finite) bases of V and V ′ respectively.
Then B(g_ℓ ⊗ e_i) ∈ S_0V′, and its Fourier transform expands over a sum in h ranging over a fixed finite set of indices independent of λ, with finitely many nonzero coefficients c^{ℓ,i}_{h,k,j}(λ). Hence, by the continuity assumption on B, these coefficients depend smoothly on λ. Introducing the notation f̂_i for the scalar components of f, the composition π_λ(B(g_ℓ ⊗ e_i)) π_λ(f_i ∗ g_ℓ) is well defined, because the second factor has the one-dimensional range CΦ_ℓ. Therefore the index k in (3.3) can only assume the value ℓ. The infinite matrix C_{i,j}(λ) = (c^{ℓ,i}_{h,ℓ,j}(λ))_{h,ℓ} has only a finite number of nonzero entries, hence it defines a linear operator B^λ_{i,j} from the linear span of the Φ_ℓ (i.e. the space of polynomials inside F) into itself, via (3.4). It is now easy to prove that, for λ ∈ [a, b], B_λ is uniquely determined by the identity (3.5), which shows that it does not depend on the choice of the functions g_ℓ, and that the same conclusion holds if we repeat the argument starting with a larger interval. Covering the positive half-line by compact intervals of this type and repeating the same argument on the negative half-line, we find a unique map λ −→ B_λ defined for λ ≠ 0 and for which (3.5) holds for every f ∈ S_0V.
Since B is U(n)-equivariant, a repetition of the proof of Lemma 2.2 shows that B_λ maps E^{ρ,sgn λ}_σ into E^{ρ′,sgn λ}_σ for every σ ∈ Σ_{ρ,ρ′,sgn λ}. It is obvious from the smoothness of the coefficients c^{ℓ,i}_{h,ℓ,j} that the restricted operators B_{λ,σ} depend smoothly on λ.
The proof of (ii) is quite obvious.
To prove (iii), denote by B̃ the operator with the domain dom(B) defined in (3.2), such that π_λ(B̃f) = B_λπ_λ(f). It is easy to verify that B̃ is closed and that S_0V is dense in dom(B) in the graph norm of B̃. Since B̃ coincides with B on S_0V ⊂ SV, B̃ is the closure of B.
To prove (iv), assume that B is symmetric. Then each operator B_{λ,σ} is self-adjoint, hence so is π_λ(B). If B′ is the adjoint of B, take g in the domain of B′ and f ∈ S_0V. By the arbitrariness of π_λ(f) subject to conditions (i′)-(iii′), we conclude that g ∈ dom(B), and that π_λ(B′g) = π_λ(B)π_λ(g), i.e. B′g = B̃g. Finally, (v) is proved in a similar way. Q.E.D.
Remark 3.3. As a typical instance of operations that will be done in the sequel, consider an expression like d∆_k^{−1}. As soon as we find out that the finite-dimensional operators π_{λ,σ}(∆_k) are invertible and depend smoothly on λ (see the next section), an operator Ψ satisfying the identity Ψ∆_k = d is automatically defined on S_0Λ^k with values in S_0Λ^{k+1} by imposing that
π_λ(Ψω) = ∑_σ π_{λ,σ}(d) π_{λ,σ}(∆_k)^{−1} P^λ_σ π_λ(ω).
Notice that formal identities like the one above are to be understood in this sense. In many instances we will make use of homogeneity properties of operators B such as those considered in Lemma 3.2. As before, we assume that (V, ρ) and (V′, ρ′) are finite dimensional representations of U(n).
We assume that the multiplicative group R + acts on V by means of the linear representation γ : R + → L(V ) and on V ′ by means of the linear representation γ ′ : R + → L(V ′ ) such that the operators γ(r) and γ ′ (r) are self-adjoint, and in such a way that each of these actions commutes with the corresponding action of U (n) (on V given by ρ and on V ′ given by ρ ′ ).
We also denote by δ_r the dilating automorphism of H_n defined by δ_r(z, t) := (r^{1/2}z, rt), r > 0, and let R_+ act on functions on H_n by a representation β built from these dilations. Then B is said to be homogeneous of degree a if condition (3.6) holds. We shall repeatedly use the following lemma, which applies in particular to operators such as ∂, ∂̄, d_H etc.
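For orientation, in the scalar case V = V′ = C with trivial γ, γ′, condition (3.6) reduces to ordinary parabolic homogeneity. For instance, since each horizontal derivative is homogeneous of degree 1/2 under δ_r, the sublaplacian L and the central derivative T are homogeneous of degree 1:

```latex
% Parabolic homogeneity under \delta_r(z,t) = (r^{1/2}z, rt):
L\,(f\circ\delta_r) \;=\; r\,(Lf)\circ\delta_r,
\qquad
T\,(f\circ\delta_r) \;=\; r\,(Tf)\circ\delta_r,
\qquad r>0 .
```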
If B is a U(n)-equivariant, left-invariant operator as in Lemma 3.2 (i), homogeneous in the sense of (3.6) for some a ∈ R, then the operators B_λ inherit a corresponding homogeneity property.
Proof. We just have to verify one implication, the other being contained in the assumptions.
(i) If B is a U(n)-equivariant, left-invariant linear operator, homogeneous in the sense of (3.6), and bounded from L^2V to L^2V′, then B satisfies the assumptions of Lemma 3.2 (i).
(ii) Assume that H ⊂ L^2V is a closed subspace, which is invariant under left-translation by elements of H_n, under the action of U(n), and invariant under the dilations (β ⊗ γ)(r), r > 0. Then the orthogonal projection onto H preserves S_0V.
Proof. As in the proof of Lemma 3.2, let {Φ_ℓ}_{ℓ∈N} be an enumeration of the orthonormal basis of monomials in F. We fix an interval I = [a, b] with 0 < a < b and, for every ℓ ∈ N, a function g_ℓ ∈ S_0, with coefficients c^{ℓ,i}_{h,k,j} ∈ L^2(I) for every choice of the indices. Then almost every point λ ∈ I is a Lebesgue point for all the c^{ℓ,i}_{h,k,j}, the relevant sums being finite (say over ℓ ≤ N). The invariance of B under translations by elements (0, t) of the center of H_n implies that B preserves the λ-support of the group Fourier transform. The same computations as in the proof of Lemma 3.2 produce an infinite matrix C_{i,j}(λ) = (c^{ℓ,i}_{h,ℓ,j}(λ))_{h,ℓ} with at most N nonzero entries on each row, defined for λ ∈ Λ. Defining B_λ by (3.4), we obtain (3.10) for λ ∈ Λ. Now, the homogeneity of B easily implies a relation between B_λ and B_{λ′} for λ, λ′ ∈ Λ. This identity allows us to extend B_λ as a smooth function of λ to every λ > 0. Obviously, the same construction can be made for λ < 0. Then, for every f ∈ S_0V, the identity (3.10) holds for every λ ≠ 0, which shows that B(S_0V) ⊂ S_0V′. Then Lemma 2.2 and the remarks following it imply that we are in the hypotheses of Lemma 3.2 (ii).
In order to prove (ii), let us denote by P the orthogonal projection from L^2V onto H. Since H is invariant under left-translations, U(n)-invariant and dilation invariant, P is a left-invariant operator which is U(n)-equivariant and homogeneous of degree 0. Moreover, by the Schwartz kernel theorem, it is given by the convolution Pf = f ∗ K with a tempered distribution kernel K taking values in L(V, V). We may therefore apply (i) to B := P and conclude by means of Lemma 3.4 that P(S_0V) ⊂ S_0V, and similarly in the remaining cases.

4. First properties of ∆_k; exact and closed forms
The domain dom(∆_0), defined according to Lemma 3.2, is the "left-invariant Sobolev space" H^2 consisting of those f ∈ L^2 such that Xf, XYf ∈ L^2 for every X, Y ∈ h_n. This follows from the L^2-boundedness of the operators X(1 + ∆_0)^{−1/2} and XY(1 + ∆_0)^{−1}. For k ≥ 1, we have the analogous description of dom(∆_k).
Indeed, ∆_k differs from a scalar operator by a term P, where P is symmetric on H^2Λ^k. By (1.19), each entry of P involves at most first-order derivatives in the left-invariant vector fields. Therefore, for ω ∈ H^2Λ^k, the term Pω is relatively bounded with relative bound ε, for every ε > 0. By the Kato-Rellich theorem [Ka], ∆_k is self-adjoint on H^2Λ^k. The following statement is an immediate consequence.
We call Riesz transforms the operators R_k = d∆_k^{−1/2}. The following identities hold (with the convention that R_{−1} = R_{2n+1} = 0). In turn, this gives the first identity of (4.2) on S_0Λ^k. The second identity is proved in the same way.
Since the two summands on the left-hand side of (4.4) are positive operators, they are L^2-contractions. Since their sum is the identity and their product is zero by (4.3), they are idempotent. This proves that they are orthogonal projections. It follows that R_k and R*_{k−1} are partial isometries. Q.E.D.
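On the core, identities of the type used in this proof follow formally from ∆ = dd* + d*d, d² = 0 and the commutation d∆_{k−1} = ∆_k d; a sketch, with R_k = d∆_k^{−1/2}:

```latex
% Product zero (d^2 = 0) and sum equal to the identity:
R_k R_{k-1} \;=\; d\,\Delta_k^{-1/2}\, d\,\Delta_{k-1}^{-1/2}
            \;=\; \Delta_{k+1}^{-1/2}\, d\, d\,\Delta_{k-1}^{-1/2} \;=\; 0, \\
R_k^* R_k + R_{k-1} R_{k-1}^*
  \;=\; \Delta_k^{-1/2}\, d^* d\,\Delta_k^{-1/2} + d\,\Delta_{k-1}^{-1}\, d^*
  \;=\; \Delta_k^{-1}\bigl(d^* d + d\, d^*\bigr) \;=\; I .
```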
The following statement says in particular that the cohomology groups of the de Rham complex are trivial.
We have thus shown that the spaces (i)-(v) are the same; the equality of the spaces (i′)-(v′) is proved in the same way.
In order to prove that the spaces (i)-(v) agree also with the space (vi), we first observe that d(DΛ^{k−1}) ⊂ ker d, since d² = 0 on DΛ^{k−1}. We thus have d(DΛ^{k−1}) ⊂ R_{k−1}(L^2Λ^{k−1}). To prove that these spaces are indeed the same, it will suffice to prove that σ ⊥ R_{k−1}(L^2Λ^{k−1}) whenever σ ∈ L^2Λ^k satisfies σ ⊥ d(DΛ^{k−1}). But the latter condition means that d*σ = 0 in the sense of distributions. So, by Lemma 3.2, σ ∈ dom d*, and since (iv′) = (ii′), we see that σ = R*_k ξ for some ξ ∈ L^2Λ^{k+1}. This implies that ⟨σ, R_{k−1}η⟩ = ⟨ξ, R_k R_{k−1}η⟩ = 0 for every η ∈ L^2Λ^{k−1}. We have thus seen that the spaces (i)-(vi) all agree, and in a similar way one proves that the spaces (i′)-(vi′) are all the same.
The proof that these spaces also do agree with the space (vii) respectively (vii') will require deeper L p -methods, and will therefore be postponed to Section 11 (see Corollary 11.3). Q.E.D.
Working out the same program for ∂, ∂̄, their adjoints and the box operators, one encounters some differences. One simplification comes from the fact that □ and □̄ act as scalar operators on horizontal forms of a given bi-degree.
On the other hand, a complication comes from the fact that they have a non-trivial null space in L^2 for certain values of p or q. It is well known since [FS] that L + iαT is injective on L^2 if and only if α ≠ ±(n + 2j), j ∈ N, and that it is hypoelliptic under the same restriction. It follows from (1.19) that □ (resp. □̄) is injective, and hypoelliptic, on (p, q)-forms provided that p ≠ 0, n (resp. q ≠ 0, n).
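The exceptional values can be read off the Bargmann picture of Section 2: since π_λ(L) acts on P_m as |λ|(2m + n)I and π_λ(T) = iλ, one gets, for λ ≠ 0 and m ∈ N,

```latex
% Spectral picture of L + i\alpha T on each U(n)-piece P_m:
\pi_\lambda(L + i\alpha T)\big|_{P_m}
  \;=\; \bigl(|\lambda|(2m+n) \;-\; \alpha\lambda\bigr)\, I ,
```

which vanishes for some λ > 0 exactly when α = 2m + n, and for some λ < 0 exactly when α = −(2m + n); hence L + iαT fails to be injective precisely for α = ±(n + 2j), j ∈ N.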
For the exceptional values p = 0, n (resp. q = 0, n), we shall denote by □′ (resp. □̄′) the unprimed operator with domain and range restricted to the orthogonal complement of the corresponding null space. Notice that the core S_0Λ^{p,q} splits according to the decompositions induced by ker ∂ and ker ∂̄. We denote by C (resp. C̄) the orthogonal projection from scalar L^2 onto ker(L + inT) (resp. ker(L − inT)). The same symbols will be used to denote the extension to forms by componentwise application.
Thus, C is the orthogonal projection onto ker ∂ when acting on (0, q)-forms, as well as onto ker ∂ * when acting on (n, q)-forms, and C̄ is the orthogonal projection onto ker ∂̄ when acting on (p, 0)-forms, as well as onto ker ∂̄ * when acting on (p, n)-forms.
Regard ∂ as a closed operator from L 2 Λ p,q to L 2 Λ p+1,q . The holomorphic Riesz transforms are defined on S 0 Λ p,q (with values in S 0 Λ p+1,q ) in analogy with (4.3) and (4.4). Proposition 4.5 then has the following analogue.
Proposition 4.6. For 0 ≤ p ≤ n − 1, the following subspaces of L 2 Λ p,q are the same: For 1 ≤ p ≤ n, the following subspaces of L 2 Λ p,q are the same: Similarly, for 1 ≤ p ≤ n, the following subspaces of L 2 Λ p,q are the same: and we call this subspace (L 2 Λ p,q ) ∂ * -cl .
For 0 ≤ p ≤ n − 1, the following subspaces of L 2 Λ p,q are the same: We also set (L 2 Λ k H ) ∂-ex = ⊕ p+q=k (L 2 Λ p,q ) ∂-ex , etc. The antiholomorphic Riesz transforms R̄ q and their adjoints R̄ * q are defined by conjugating all terms in (4.5) and (4.7), respectively, and replacing p by q. The analogue of formula (10.3) also holds true for all q. The rest proceeds in perfect analogy with the holomorphic case.
The following statements are obvious in view of Proposition 4.6.
Lemma 4.8. C p is the orthogonal projection of L 2 Λ p,q onto the kernel of ∂, and, for p = n, C n ω = 0 if and only if ω = 0. Analogous statements hold for the operators C̄ q , if we replace p by q and conjugate all terms. In particular, C 0 = C and C̄ 0 = C̄; moreover, ∂ * ω = 0 whenever C p ω = 0, and ∂̄ * ω = 0 whenever C̄ q ω = 0.
Given a horizontal k-form ω = p+q=k ω pq we finally set

5. A decomposition of L 2 Λ k H related to the ∂ and ∂̄ complexes

In this section, we shall work under the assumption that 0 ≤ k ≤ n, as this turns out to be more convenient in view of the Lefschetz decomposition described in Prop. 2.1 of [MPR1]. The case where k > n can be reduced to the case k ≤ n by means of Hodge duality, as will be shown in Section 8.
Our starting point in the spectral analysis of ∆ k is the decomposition obtained in Proposition 4.5. Since d∆ k−1 = ∆ k d for all k ≥ 1, using the results from [MPR1] for ∆ 1 , we can lift the decomposition of L 2 Λ 1 into ∆ 1 -invariant subspaces, and the related spectral properties, to (L 2 Λ 2 ) d-ex . Therefore, we inductively analyse the (L 2 Λ k ) d-ex -component in the decomposition of L 2 Λ k by means of the preceding step.
Thus, we are led to study the (L 2 Λ k ) d * -cl -component. Notice that, because of the invariance of θ and the equivariance of d * H , Φ commutes with the action of U (n).
Clearly, ∆ k maps a subspace V of (S 0 Λ k ) d * -cl into itself if and only if the (non-differential, see (5.17) below) operator D k maps the corresponding subspace of S 0 Λ k H into itself. For this reason, we begin by decomposing S 0 Λ k H into orthogonal subspaces which are invariant under D k and on which D k takes a simple form.

5.1. The subspaces.
The decomposition is based on the following lemma.
The term ω ′ is uniquely determined, and we can assume, in addition, that (5.6) holds. Notice that, even with the extra assumption (5.6), ξ and η are not uniquely determined.
Observe that the decomposition (5.4), without the extra assumptions on ξ and η, can be iterated, so as to obtain in the next step a decomposition where each of the primed symbols represents a form satisfying (5.5). If ω is a horizontal k-form, the iteration stops after k steps, leaving no "remainder terms".
We are thus led to introduce, for each m ≤ k, the spaces of forms defined in (5.7). It is convenient to observe that, in a sequence of at least three alternating ∂'s and ∂̄'s, we can replace a product ∂∂̄ or ∂̄∂ by d 2 H = −T −1 e(dθ). Since T −1 preserves S 0 -forms, the form ω in (5.7) can thus be rewritten accordingly. For j = 1, 2 and ℓ ∈ N, we then define the spaces W p,q 0 and W p,q j,ℓ . The barred symbols W̄ k j , W̄ p,q j , etc., denote the L 2 -closures of the corresponding spaces W k j , W p,q j , etc. We wish to characterize which spaces among the W p,q 0 and the W p,q j,ℓ are non-trivial. Proposition 5.3. Let 0 ≤ k ≤ n and p + q = k. Then W p,q 0 is trivial if and only if k = n and 1 ≤ p, q ≤ n − 1.
We now turn to the case j = 2, which requires a more refined discussion. Let us set K p,q 2 as below; we claim that W p,q 2 decomposes as the orthogonal sum (5.11), and clearly the two subspaces on the right-hand side are orthogonal; this proves (5.11). Let us set K p,q 2,ℓ = e(dθ) ℓ K p,q 2 . Then (5.12) W p,q 2,ℓ = K p,q 2,ℓ ⊕ e(dθ) ℓ+1 W p,q 0 , and this decomposition is again orthogonal.
In order to prove the statement in the lemma for j = 2, using the orthogonal decomposition in (5.12) we may now argue as before by means of Prop. 2.1 in [MPR1] in order to see that W p,q 2,ℓ can be non-trivial only if ℓ ≤ n − p − q − 2. Moreover, to verify that e(dθ) ℓ is injective on W p,q 2 under this condition, it suffices to check injectivity on each of the subspaces on the right-hand side of (5.11). But this can be done by the same reasoning that we used for the case j = 1.
Proposition 5.7. L 2 Λ k H decomposes as the orthogonal sum of the subspaces above. We recall that W p,q 0 is non-trivial for p + q ≤ n − 1 and, if p + q = n, for pq = 0. Proof. We have already shown that S 0 Λ k H is contained in the sum of the subspaces on the right-hand side. It is then sufficient to show that any two S 0 -forms belonging to two different subspaces are orthogonal.
The fact that W k 0 is orthogonal to W k j,ℓ for j = 1, 2 is a consequence of the fact that W k 0 ⊂ ker ∂ * ∩ ker ∂̄ * , whereas W k j,ℓ ⊂ ran ∂ + ran ∂̄. To prove the remaining orthogonality relations, we shall proceed inductively. For this purpose, it will be convenient to represent the elements of W k j,ℓ in the form (5.7) with m = j + 2ℓ and to rename, for the purpose of this proof, W p,q j,ℓ as W p,q m if m = j + 2ℓ. Given m ≥ m ′ ≥ 1, there are three kinds of scalar products to consider. In the first case, by Corollary 5.6, the scalar product reduces to that of an element of W p,q m−1 with an element of W p ′ ,q ′ m ′ −1 , and induction on m ′ yields the conclusion. We now discuss to what extent the pairs (ξ, η) ∈ W p,q 0 × W p,q 0 provide a parametrization of the spaces W p,q j,ℓ for j = 1, 2. Lemma 5.8. Given ξ ∈ W p,q 0 , there exists a unique ξ ′ ∈ W p,q 0 such that ∂ξ = ∂ξ ′ and C p ξ ′ = 0. An analogous statement holds for ∂̄ in place of ∂.
There only remains the case p = 0, where C 0 = C is the orthogonal projection onto the kernel of □ (which in this case agrees with ker ∂). This is a self-adjoint operator, so that, by Lemma 3.4, S 0 Λ 0,q = (ker □ ∩ S 0 Λ 0,q ) ⊕ (ran □ ∩ S 0 Λ 0,q ). The commutation relation ∂̄ * □ = (□ − iT )∂̄ * from (1.20) then implies that the two subspaces in this decomposition are mapped under ∂̄ * into ker(□ − iT ) ∩ S 0 Λ 0,q and ran(□ − iT ) ∩ S 0 Λ 0,q , respectively. This yields the desired decomposition, where P denotes the orthogonal projection onto the kernel of □ − iT . Then ξ ′ = (I − C)ξ has the desired properties. Q.E.D.
Proposition 5.12. The subspaces W p,q 0 , W p,q 1,ℓ , W p,q 2,ℓ are invariant under the action of D k .
Using the commutation relations (1.20), we see that W k 1 is D k -invariant. Moreover, if ξ and η are (p, q)-forms, so are ξ ′ and η ′ , hence each W p,q 1 is mapped into itself. As before, (1.20) implies that D k ω ∈ W k 2 , and each subspace W p,q 2 is mapped into itself. Q.E.D.

5.3. Lifting by Φ.
Denote by V p,q 0 , V p,q 1,ℓ , etc., the subspaces Φ(W p,q 0 ), Φ(W p,q 1,ℓ ), etc., of (L 2 Λ k ) d * -cl . We want to show that their closures give an orthogonal decomposition of (L 2 Λ k ) d * -cl . This is not a priori obvious, because Φ is not an orthogonal map; the fact that it preserves the orthogonality of the subspaces we are working with is quite peculiar. On the other hand, the reader may already have noticed an instance of this peculiarity in the fact that a non-symmetric operator such as D k admits a rather fine decomposition into invariant subspaces which are orthogonal.
Proposition 5.13. For 0 ≤ k ≤ n we have the orthogonal decompositions where each of the subspaces V p,q 0 , V p,q j,ℓ is non-trivial and ∆ k -invariant.
Proof. Since Φ is a bijection from S 0 Λ k H onto (S 0 Λ k ) d * -cl , the decomposition follows from Proposition 5.7. Hence it remains to show that this decomposition is orthogonal; by (5.2), this amounts to proving the corresponding orthogonality relations in S 0 Λ k H . Q.E.D.

6. Intertwining operators and different scalar forms for ∆ k

Following the decomposition of L 2 Λ k described in the previous section, we continue assuming 0 ≤ k ≤ n.
In this section we describe the form that ∆ k attains on each of the subspaces of the decomposition (5.25) of (L 2 Λ k ) d * -cl . In particular, we will show that, up to conjugation with invertible operators, ∆ k acts on V p,q 0 and on each V p,q 2,ℓ as a scalar operator. For V p,q 1,ℓ instead, a further splitting will be necessary in order to reduce ∆ k to a scalar form in a similar way.
In the process, we will also describe the intertwining operators that reduce ∆ k to such scalar forms.
6.1. The case of V p,q 0 . The simplest case is that of V p,q 0 , because Φ acts on this space as the identity map and we already know by (5.21) that, on W p,q 0 , D k = ∆ 0 + i(q − p)T . Hence, in this case ∆ k is itself a scalar operator, and we simply have the following statement. When we pass to j = 1, 2, we want to express ∆ k in terms of the parameters (ξ, η) in the definition of W p,q j,ℓ , which we can choose from the parameter spaces Z p,q = X p,q × Y p,q .
6.2. The case of V p,q 2,ℓ . According to Corollary 5.9, we can write (6.2) W p,q 2,ℓ = { ω = e(dθ) ℓ (∂∂̄ξ + ∂̄∂η) : (ξ, η) ∈ Z p,q }. Recall from the discussion in Section 4 and the definitions of X p,q and Y p,q (see (5.16)) that □ is injective when restricted to X p,q and □̄ is injective when restricted to Y p,q .
We define the operator matrix Q, acting on the column vector (ξ, η) t , by specifying, for ε, δ = ±, the entries Q ε δ . Observe here that the operator ∆ H − T 2 + m 2 satisfies the estimate ∆ H − T 2 + m 2 ≥ m 2 ≥ 1/4, so that it has a unique positive square root.
The following identities are easily verified: (6.9). Lemma 6.3. If p + q ≤ n − 1, then the following properties hold true. Proof. To prove (i), we compute formally the determinant of Q and find by (6.9) that det Q = −4iT Γ. The formula for Q −1 is now obvious. Notice also that the operators Q ε δ leave the space W p,q 0 invariant. The remaining statement in (i) is now clear.
We set Lemma 6.4.
Proof. We have Ξ p,q = (I − C p − C̄ q )W p,q 0 , so it suffices to verify the corresponding identities; these follow from (ii) of Lemma 6.3. Q.E.D.
Lemma 6.5. Let G be the matrix appearing in formula (5.23). Then −iT −1 G admits the diagonalization stated below. Proof. In order to formally compute the eigenvalues λ ± of −iT −1 G, observe that the characteristic equation for G can be solved by means of (6.9). These computations show that eigenvectors of G with eigenvalues τ ± are given, respectively, by the columns of the matrix (6.13), which therefore formally diagonalizes −iT −1 G as claimed. Q.E.D.
Recall now from (6.5) and the definition of V p,q 1,ℓ the identity obtained if we define the operator A 1,ℓ : (W p,q 0 ) 2 → L 2 Λ k as in (6.14). Observe also that Lemma 6.4 shows that we may realize Z p,q in this identity as the space Q( Z̃ p,q ) and use Z̃ p,q as a parameter space for V p,q 1,ℓ . This has the advantage of reducing the operator D k in (5.23) to diagonal form.
By (5.23), Lemma 5.11 and Lemma 6.5, we obtain a diagonalized form. This suggests further introducing the operators A ± 1,ℓ acting by (6.17), with Q ± as in (6.13). The following proposition is then immediate.
Proposition 6.6. The space V p,q 1,ℓ decomposes as the direct sum

Moreover, the linear mappings
are bijective, and the following identities hold on W p,q 0 and Ξ p,q , respectively. It should be stressed that up to this point we have not yet shown that the subspaces V p,q,+ 1,ℓ and V p,q,− 1,ℓ are mutually orthogonal. This fact will be a consequence of the analysis of the intertwining operators in the next section; see Lemma 7.6.

7. Unitary intertwining operators and projections
The intertwining operators for ∆ k that we defined in the previous section were non-unitary and unbounded. In order to verify that the scalar forms to which ∆ k , restricted to the subspaces V p,q 0 , V p,q,± 1,ℓ and V p,q 2,ℓ , has been reduced on the corresponding parameter spaces by means of formulas (6.1), (6.18) and (6.3) indeed describe the spectral theory of ∆ on these subspaces, we need to replace the previous intertwining operators by unitary ones. Our next tasks will therefore be the following: (1) replace these intertwining operators with unitary ones; (2) determine the orthogonal projections from L 2 Λ k onto V p,q 0 , V p,q,± 1,ℓ and V p,q 2,ℓ , the L 2 -closures of the invariant subspaces V p,q 0 , V p,q,± 1,ℓ and V p,q 2,ℓ . These two tasks can be accomplished simultaneously by making use of the polar decomposition of the intertwining operators.
We shall repeatedly use the following basic fact from spectral theory (compare [RS] for the case H = K).
Proposition 7.1. Let H, K be Hilbert spaces and A : dom A ⊂ H → K be a densely defined, closed operator. Then there exist a positive self-adjoint operator |A| : dom A ⊂ H → H, with dom |A| = dom A, and a partial isometry U : H → K with ker U = ker A and ran U = ran A, so that A = U |A|. |A| and U are uniquely determined by these properties together with the additional condition ker |A| = ker A.
Moreover, |A| = √ A * A, U * U is the orthogonal projection from H onto (ker A) ⊥ = ran A * , and U U * is the orthogonal projection from K onto ran A = (ker A * ) ⊥ .
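The content of Proposition 7.1 can be illustrated by a finite-dimensional sanity check. The following NumPy sketch (the matrix A is hypothetical sample data, not taken from the paper) computes the polar decomposition A = U |A| with |A| = √(A * A) and verifies the stated properties of U and of the projections U * U and U U * :

```python
import numpy as np

# Polar decomposition A = U|A| of an injective matrix A : R^2 -> R^3.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])            # hypothetical injective 3x2 matrix

w, V = np.linalg.eigh(A.T @ A)        # A^*A is symmetric positive definite
absA = V @ np.diag(np.sqrt(w)) @ V.T  # |A| = (A^*A)^{1/2}
U = A @ np.linalg.inv(absA)           # partial isometry with A = U|A|

assert np.allclose(U @ absA, A)          # A = U |A|
assert np.allclose(U.T @ U, np.eye(2))   # U^*U = projection onto (ker A)^perp = I here
P = U @ U.T                              # U U^* = orthogonal projection onto ran A
assert np.allclose(P, P.T) and np.allclose(P @ P, P)
assert np.allclose(P @ A, A)             # P fixes ran A
```

Since A is injective here, ker U = ker A = {0} and U * U is the identity; in the infinite-dimensional setting of the paper, U is only a partial isometry.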
In order to pass from a possibly unbounded intertwining operator to a unitary one, we also need the following general principle.
Proposition 7.2. Let H 1 , H 2 be Hilbert spaces and let D 1 ⊂ H 1 , D 2 ⊂ H 2 be dense subspaces. Assume that for j = 1, 2, S j : dom S j ⊂ H j → H j is a self-adjoint operator on H j for which D j is a core such that S j (D j ) ⊂ D j . Moreover, let A : dom A ⊂ H 1 → H 2 be a closed operator such that the following properties hold true: (i) D 1 ⊂ dom A and A(D 1 ) ⊆ D 2 ; (ii) A intertwines S 1 and S 2 on the core D 1 , i.e., (7.1) AS 1 ξ = S 2 Aξ for all ξ ∈ D 1 . Consider the polar decomposition A = U |A| from Proposition 7.1, where |A| = √ A * A and U : H 1 → H 2 is a partial isometry, and assume furthermore that D 1 ⊂ dom |A|, and that (iii) |A|(D 1 ) = D 1 ; (iv) the commutation relation (7.2) S 1 |A|ξ = |A|S 1 ξ for all ξ ∈ D 1 holds true on the core D 1 . Then also U intertwines S 1 and S 2 on the core D 1 , i.e., U (D 1 ) = A(D 1 ) ⊂ D 2 , and (7.3) U S 1 ξ = S 2 U ξ for all ξ ∈ D 1 .
Moreover, we have ran A = A(D 1 ) = U (H 1 ), ker A = ker |A| = ker U, and P := U U * is the orthogonal projection from H 2 onto A(D 1 ).

Let us finally denote by S r 2 the restriction of S 2 to dom S r 2 := dom S 2 ∩ ran U . If we assume in addition that conditions (v) and (vi) hold true, then U is injective, and we even have that U (dom S 1 ) = dom S r 2 , and S r 2 = U S 1 U −1 on dom S r 2 .

Proof. Let us re-write (7.1) as U |A|S 1 ξ = S 2 U |A|ξ for all ξ ∈ D 1 . Applying (7.2), we find that U S 1 (|A|ξ) = S 2 U (|A|ξ) for all ξ ∈ D 1 , which implies (7.3) because of (iii). Note that U (D 1 ) = A(D 1 ) ⊂ D 2 in view of (iii). Since A is closed and D 1 is a core for A, we have ran A = A(D 1 ), and the remaining statements about ran A, ker A and U U * are obvious by Proposition 7.1.
If we assume in addition that (v) and (vi) hold true, then clearly U is injective. Moreover, (7.3) implies that U (I − iS 1 )ξ = (I − iS 2 )U ξ for all ξ ∈ D 1 .
Assume now that x ∈ dom S 2 ∩ ran U. Then x = U y for some unique y ∈ H 1 . Choose a sequence {x n } n in D 2 such that x n → x and S 2 x n → S 2 x.
Since S 2 x n = S 2 (P x n ) + S 2 ((I − P )x n ), where the components in this decomposition lie in mutually orthogonal subspaces, we see that there is some z = U w ∈ ran U = U (H 1 ) such that P x n → x = U y and S 2 (P x n ) → z = U w.
We can write P x n in a unique way as P x n = U y n , with y n ∈ D 1 , since U (D 1 ) = A(D 1 ) = P (D 2 ). Since U is isometric on H 1 , we then must have that y n → y. Moreover, by (7.3), U S 1 y n = S 2 U y n = S 2 (P x n ) → z, so that S 1 y n → w. This shows that y ∈ dom S 1 , hence x = U y ∈ U (dom S 1 ). Q.E.D.
Remark 7.3. If we do not require that the crucial commutation relation in (iv) is satisfied, but that in addition to the conditions (i) to (iii) the natural assumptions D 2 ⊂ dom A * and A * (D 2 ) ⊂ D 1 hold true, then one can conclude that (7.5) S 1 |A| 2 ξ = |A| 2 S 1 ξ for all ξ ∈ D 1 .
One might hope that (7.2) would follow from (7.5) by means of general spectral theory. However, this hope is destroyed by a classical example due to Nelson (cf. [RS]), which shows that condition (7.5) will in general not suffice to conclude that the operators S 1 and |A| 2 commute, in the sense that their respective spectral resolutions commute. This, however, would be needed in order to derive (7.2).
However, in our applications, S 1 will turn out to be a scalar operator on the Heisenberg group, and A * A a positive square matrix whose entries are scalar operators too, so that (7.2) will easily follow from formula (12.7) for the square root of such a matrix.
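Formula (12.7) itself lies outside this chunk; for orientation, the classical closed form for the square root of a positive-definite 2 × 2 matrix, which a formula of this kind typically encodes, is sketched below (t and d are ad hoc names for the trace and determinant):

```latex
% Square root of a positive-definite 2x2 matrix M.
% With t = tr M and d = det M (both positive), one has
\[
  \sqrt{M} \;=\; \frac{M + \sqrt{d}\, I}{\sqrt{\,t + 2\sqrt{d}\,}},
  \qquad t = \operatorname{tr} M, \quad d = \det M .
\]
% If the entries of M are scalar operators commuting with S_1, the right-hand
% side is built from these entries by sums, products and functional calculus,
% so sqrt(M) again commutes with S_1, as required for (7.2).
```

The formula is verified by diagonalizing M: each eigenvalue a of M is mapped to (a + √d)/√(t + 2√d) = √a.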
In the sequel, by P H 1 : H → H 1 we shall denote the orthogonal projection from the Hilbert space H onto its closed subspace H 1 .
In our later applications of Proposition 7.1, the next observation will often facilitate the computation of the corresponding operators A * A.
Lemma 7.4. Let H, K be Hilbert spaces and H 1 ⊆ H and K 1 ⊆ K be closed subspaces. Let A : dom A ⊂ H → K be a densely defined, closed operator, and assume that D ⊂ dom A is a core for A. Assume furthermore that D 1 := D ∩ H 1 is dense in H 1 and that dom A 1 := dom A ∩ H 1 is mapped under A into K 1 , so that the operator A 1 : dom A 1 ⊂ H 1 → K 1 , given by restricting A to dom A 1 := dom A ∩ H 1 , is densely defined and closed.
Under these conditions, also A * is densely defined, and dom A * ∩ K 1 ⊂ dom A * 1 . We shall further assume that E ⊂ K is a subspace of dom A * such that A(D) ⊂ E and A * (E) ⊂ D (so that, in particular, E 1 := E ∩ K 1 is contained in dom A * 1 ). Then we have In particular, if we know that A * A maps D 1 into H 1 , then A * 1 A 1 ξ = A * Aξ for every ξ ∈ D 1 .
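The mechanism of Lemma 7.4 can be checked in a toy finite-dimensional setting. In the sketch below (the matrix A and the choice of H 1 are hypothetical), H 1 is spanned by the first two coordinates, A maps H 1 into K 1 , and A * A leaves H 1 invariant, so the restriction A 1 satisfies A * 1 A 1 = A * A on H 1 :

```python
import numpy as np

# H = R^3, H_1 = span(e_1, e_2); A is chosen so that A(H_1) lies in K_1
# and A^*A leaves H_1 invariant (block-triangular structure).
A  = np.array([[1.0, 2.0, 0.0],
               [0.0, 3.0, 0.0],
               [0.0, 0.0, 5.0]])
A1 = A[:2, :2]                       # restriction A_1 of A to H_1

G  = A.T @ A                         # A^*A on H
G1 = A1.T @ A1                       # A_1^* A_1 on H_1

assert np.allclose(G[:2, 2], 0)      # A^*A leaves H_1 invariant
assert np.allclose(G[:2, :2], G1)    # A_1^* A_1 agrees with A^*A on H_1
```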

7.1. A unitary intertwining operator for V p,q 0 .
We recall from the preceding discussion that the intertwining operator on V p,q 0 is Φ, which reduces to the identity on this space. Hence, this case is trivial.

7.2. Unitary intertwining operators for V p,q,± 1,ℓ .
Our next goal is to replace the intertwining operators A ± 1,ℓ from Proposition 6.6 by unitary ones. Recall from Proposition 6.6 and (6.17) that A ± 1,ℓ = A 1,ℓ Q ± , where A 1,ℓ is given by (6.14). According to Proposition 7.2, we seek to define unitary intertwining operators U ± 1,ℓ := A ± 1,ℓ ((A ± 1,ℓ ) * A ± 1,ℓ ) −1/2 , which are expected to be isometries from the closed subspaces W p,q 0 , resp. Ξ p,q , onto their ranges V p,q,± 1,ℓ . Recall, however, that we have not yet shown that the latter spaces are mutually orthogonal; this will in fact follow easily from the subsequent discussion. Now, since (A ± 1,ℓ ) * A ± 1,ℓ = Q ± * (A * 1,ℓ A 1,ℓ )Q ± , we shall begin by computing A * 1,ℓ A 1,ℓ . Subsequently, we will compute the product Q * A * 1,ℓ A 1,ℓ Q, showing, in particular, that it is a diagonal matrix. The diagonal terms will give the explicit forms of (A ± 1,ℓ ) * A ± 1,ℓ , whereas the vanishing of the off-diagonal terms will prove the orthogonality of the spaces V p,q,± 1,ℓ . Since these computations are tedious and unenlightening, we shall only state the relevant identities here, postponing their proofs to the Appendix.
Let us set, for s + j ≤ n, the operators below. Lemma 7.6. R 11 maps W p,q 0 bijectively onto itself; R 22 maps Ξ p,q bijectively onto itself and is zero on C p W p,q 0 ⊕ C̄ q W p,q 0 , the orthogonal complement of Ξ p,q in W p,q 0 . Proof. The proof of the formulas for the components of R is postponed to the Appendix. Given these formulas, we prove here the mapping properties of R 11 and R 22 .
On W p,q 0 , R 11 acts as a symmetric scalar operator. Since ∆ H = L + i(q − p)T , Γ and −T 2 are positive operators, we have It follows that the operators (R 11 ) λ,σ in (3.1) also satisfy the same inequality from below, and hence are invertible. Applying Lemma 3.2 (ii), we obtain that R 11 admits an inverse R −1 11 : S 0 −→ S 0 . We tensor with Λ p,q and restrict R −1 11 to W p,q 0 . By (1.20), the composition ∂ * R 11 can be expressed as R ′ 11 ∂ * , with R ′ 11 differing from R 11 in that ∆ H is replaced by ∆ H − iT (also in the expression of Γ), and similarly for∂ * R 11 , ∂ * R −1 11 and∂ * R −1 11 . Therefore, R 11 maps W p,q 0 bijectively onto itself. As to R 22 , we first observe that (7.10) so that (7.11) . Moreover, by the injectivity of R 11 , ker R 22 = ker R 11 R 22 = ker ⊕ ker .
In order to repeat the argument used above for R 11 , we start from the operator R̃ 22 = R 22 + δ p,0 C + δ q,0 C̄ (with δ denoting the Kronecker symbol), acting on scalar-valued functions. By (7.11), R̃ 22 is invertible on S 0 and, after tensoring and restricting, it is also invertible on W p,q 0 . The conclusion now follows at once. Q.E.D.
Proof. Obviously, R maps the subspace Z p,q of (W p,q 0 ) 2 into itself, so that the identities follow from Lemma 7.6 and Lemma 7.4. The first statements are obvious, and, since the matrix Q * N Q is diagonal, so is A * 1,ℓ A 1,ℓ . Thus, the map A 1,ℓ preserves the orthogonality of the coordinate subspaces of (W p,q 0 ) 2 . Let us finally compute U ± 1,ℓ more explicitly. To this end, notice that the column vectors of operators U ± 1,ℓ can be combined to form a square matrix. Recall that we have set s = p + q and m = (n − s)/2.
Proposition 7.8. We have the expression (7.14) for U ± 1,ℓ , where Σ 11 and Σ 22 are given below. Proof. From Corollary 7.7 we obtain the stated expression once we verify that the factor T −1 in the second row disappears; this follows from Lemma 5.5 and a similar computation, as claimed. In order to conclude the proof, it suffices to notice that Σ jj = R −1/2 jj , j = 1, 2, where the R jj are given in Lemma 7.6. Q.E.D.
We wish now to apply Proposition 7.2 to A ± 1,ℓ . We restrict ourselves to A − 1,ℓ , the other case being simpler.
We set and denote by A the closure of A − 1,ℓ . The commutation relation (7.1) is then satisfied because of (6.18). Moreover, clearly S 2 (D 2 ) ⊂ D 2 .
Next, according to Corollary 7.7, A * A is a positive scalar operator, and so is S 1 . But then also |A| = (A * A) 1/2 is a scalar operator, hence commutes with S 1 , so that condition (iv) in Proposition 7.2 is satisfied too.
Conditions (iii) and (v) of Proposition 7.2 follow from Lemma 7.6, and condition (vi) is obvious. Finally, our explicit formulas for U = U − 1,ℓ in Proposition 7.8 show that here U maps the space Ξ p,q into S 0 Λ k , so that U * maps S 0 Λ k into Ξ p,q , and we see that P (D 2 ) = P (S 0 Λ k ) = U (Ξ p,q ) = A(D 1 ), where P = U U * . This shows that also condition (vii) is satisfied. Q.E.D.
In the same way, we see that all the hypotheses of Proposition 7.2 are satisfied by A + 1,ℓ , and as a consequence we obtain Proposition 7.9. The operator U ± 1,ℓ defined by (7.7) maps W p,q 0 , respectively Ξ p,q , onto V p,q,± 1,ℓ and intertwines D ± with ∆ k on the respective core.

7.3. A unitary intertwining operator for V p,q 2,ℓ . We next wish to replace the intertwining operator A 2,ℓ from Proposition 6.2 by a unitary one, denoted by U 2,ℓ = U p,q 2,ℓ , which, according to Proposition 7.2, should be given by A 2,ℓ (A * 2,ℓ A 2,ℓ ) −1/2 . In fact, it will be convenient to modify this expression by introducing the unitary central factor σ(T ) = i −1 T /|T |.
Recall that the non-unitary intertwining operator A 2,ℓ from Z p,q to V p,q 2,ℓ is given by (7.18). Since A 2,ℓ acts on Z p,q , the identities in Lemma 5.5, in combination with (1.20), yield the following. Lemma 7.10. We have (i), with E as above. Moreover, (A * 2,ℓ A 2,ℓ ) 1/2 maps Z p,q bijectively onto itself and, on Z p,q , takes the stated form, where M̃ and ∆ ′ are given below. Proof. The proof of the formulas is postponed to the Appendix, where we also prove the identity (7.23). Hence we only prove here that (A * 2,ℓ A 2,ℓ ) 1/2 maps Z p,q bijectively onto itself, assuming the validity of (7.20) and (7.23).
We can factor E as indicated. Applying Lemma 3.2 as in the proof of Lemma 7.6, we can conclude that the relevant operator is invertible on S 0 . Restricting to Z p,q , we obtain the conclusion. Q.E.D.
Some cancellations occur when we compute the matrix product A 2,ℓ M̃ , as the next lemma shows.
Next, using (1.20) and the identity (5.19), with s = p + q in place of k, we have This proves the lemma. Q.E.D.
From the previous results we immediately get an explicit formula for U 2,ℓ , at least when p ≠ 0 and q ≠ 0. However, if p = 0 or q = 0, our formulas, when properly interpreted, persist, and we obtain the following result. Recall that if p = 0, then X p,q = (I − C)X p,q , and if q = 0, then Y p,q = (I − C̄)Y p,q . Let us correspondingly define r and r̄, so that r is always invertible on X p,q , and r̄ on Y p,q .
Proposition 7.12. The operator U 2,ℓ , which acts on Z p,q , is given by and where ∆ ′ , ∆ ′′ and c are given by Lemma 7.10.
Finally, we have the following analogue of Proposition 7.9.
Proof. This will follow by applying Proposition 7.2 to A 2,ℓ . To this end, we choose the spaces D 1 , D 2 accordingly and denote by A the closure of A 2,ℓ on Z p,q . The commutation relation (7.1) is then satisfied because of (6.3). Moreover, clearly S 2 (D 2 ) ⊂ D 2 , and A maps D 1 bijectively onto V p,q 2,ℓ ⊆ D 2 . Next, according to Lemma 7.10, A * A is a positive matrix with scalar operator entries, and S 1 = S 1 I is a scalar operator. But then also |A| = √ A * A is a matrix with scalar operator entries, hence commutes with S 1 I, so that condition (iv) in Proposition 7.2 is satisfied too.
In order to verify conditions (iii), (v) and (vi), we can make use of the joint spectral theory of L and i −1 T described in Section 9. Indeed, it is immediate by means of the spectral decomposition of S 1 that (vi) is satisfied.
Moreover, |A| maps Z p,q into itself; this can be verified as follows. The formula for |A| = (A * 2,ℓ A 2,ℓ ) 1/2 in Lemma 7.10 shows that it suffices to prove that the operator matrix E maps Z p,q into itself. This in turn will be verified if we can show that E 12 maps Y p,q into X p,q and E 21 maps X p,q into Y p,q . But, according to Lemma 12.3 in the Appendix, the operator appearing there maps W p,q 0 into Ξ p,q , so that the latter claims are immediate. Moreover, the formula for A * 2,ℓ A 2,ℓ in Lemma 7.10, in combination with Lemma 10.9 and Plancherel's theorem, shows that A * 2,ℓ A 2,ℓ = |A| 2 has a trivial kernel in L 2 , and then the same applies to |A|, which proves (v).
Finally, our explicit formulas for U = U 2,ℓ in Proposition 7.12 show that U maps the space Z p,q into S 0 Λ k , so that U * maps S 0 Λ k into Z p,q , and we see that P (D 2 ) = P (S 0 Λ k ) = U (Z p,q ) = A |A| −1 (Z p,q ) = A(Z p,q ) = A(D 1 ), where P = U U * . This shows that also condition (vii) is satisfied, which concludes the proof of Proposition 7.13. Q.E.D.

8. Decomposition of L 2 Λ k
We are now in a position to describe completely the orthogonal decomposition of L 2 Λ k into ∆ k -invariant subspaces, together with the unitary intertwining operators that reduce ∆ k to scalar form.
Theorem 8.1. Let 0 ≤ k ≤ n. Then L 2 Λ k admits the orthogonal decomposition where R = R k−1 denotes the Riesz transform.
The Hodge Laplacian ∆ k leaves all the subspaces in this decomposition invariant, and we have seen that, after applying the unitary intertwining operators derived in the previous sections, it will assume a scalar form on each of the corresponding parameter spaces.
In Table 1, we list these subspaces, the corresponding scalar forms of ∆ k , the associated unitary intertwining operators as well as the orthogonal projections onto these subspaces.
By J, we denote the inclusion operator of a given subspace into L 2 Λ k .
We now remove the condition 0 ≤ k ≤ n and prove a decomposition theorem for L 2 Λ k also in the case n < k ≤ 2n + 1.
We are going to use the * -Hodge operator defined on an arbitrary Riemannian d-manifold M , acting at each point m ∈ M as a linear mapping * : Λ k m → Λ d−k m , where Λ k m denotes the k-th exterior power of the dual of the tangent space at m. It will also be viewed as a linear mapping acting on forms on M . For its definition and basic properties we refer to [Ra]. We summarize the main properties in the following statement.
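The summarized statement did not survive extraction here; the standard properties of the Hodge star on an oriented Riemannian d-manifold, which presumably include the ones intended (sign conventions for the codifferential vary with the reference), read as follows:

```latex
% Standard properties of * on k-forms of an oriented Riemannian d-manifold:
\begin{aligned}
&(1)\quad \langle *\sigma, *\omega\rangle = \langle \sigma, \omega\rangle
   \quad\text{($*$ is a pointwise isometry $\Lambda^k_m \to \Lambda^{d-k}_m$);}\\
&(2)\quad ** = (-1)^{k(d-k)} \ \text{on } \Lambda^k;\\
&(3)\quad d^* = (-1)^{d(k+1)+1}\, * \, d \, * \ \text{on } k\text{-forms};\\
&(4)\quad *\,\Delta_k = \Delta_{d-k}\, * .
\end{aligned}
```

Property (4), which is the one used below, follows from (2) and (3) together with ∆ = dd * + d * d.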
In our situation, M = H n and d = 2n + 1. It follows from property (4) above that a subspace V ⊆ L 2 Λ k is ∆ k -invariant if and only if * V ⊆ L 2 Λ d−k is ∆ d−k -invariant. Thus, we wish to describe the ∆ k -invariant subspaces of L 2 Λ k , when n < k ≤ 2n + 1.
We denote by Λ k V the space of vertical k-forms, that is, the forms ω = θ ∧ ω 2 , with ω 2 ∈ Λ k−1 H , and by µ = θ ∧ dθ ∧ · · · ∧ dθ the volume element on H n . Similarly, µ H = dθ ∧ · · · ∧ dθ will denote the corresponding volume element on the horizontal structure. In the same way as the * -Hodge operator on H n is determined by the relations σ ∧ * ω = ⟨σ, ω⟩ µ for all σ, ω ∈ Λ k , we can introduce the * -Hodge operator * H acting on the horizontal structure, by requiring that σ ∧ * H ω = ⟨σ, ω⟩ µ H for all σ, ω ∈ Λ k H . The following results are easy consequences of these defining relations.
This shows that * (W p,q 0 ) ⊂ Z r,s 0 , and in a similar way one proves that * (Z r,s 0 ) ⊂ W p,q 0 . Combining this with (ii), we obtain (iii).
Next, if ω = ∂ξ + ∂̄η, with ξ, η ∈ W p,q 0 , then a direct computation shows that * (W p,q 1 ) ⊂ Z r,s 1 , and in a similar way one proves that * (Z r,s 1 ) ⊂ W p,q 1 ; this gives (iv) in the case ℓ = 0. For the general case, we observe a corresponding identity valid for all test forms ω and σ.
The proof about the non-triviality of these subspaces follows from Propositions 5.3 and 5.4. Q.E.D.
Definition 8.5. When r + s ≥ n we set and denote by Υ r,s 0 , Υ r,s,± 1,ℓ respectively Υ r,s 2,ℓ the closures of these spaces in L 2 Λ k .
Theorem 8.6. Let n < k ≤ 2n + 1. Then L 2 Λ k admits the orthogonal decomposition stated above, where R * = R * k+1 . Moreover, since the * -Hodge operator transforms the subspaces in this decomposition into the corresponding subspaces in the decomposition given by Theorem 8.1, with p := n − s and q := n − r, the unitary intertwining operators which transform ∆ k on each of these subspaces into scalar form are simply given by those from Table 1 at the end of Section 8, composed on the right with the * -Hodge operator; similar remarks apply to the orthogonal projections and scalar forms.

9. L p -multipliers
The decomposition of L 2 Λ k presented in the previous sections, together with the description of the action of ∆ k on the various subspaces, can be used for the L p -functional calculus of ∆ k . For this purpose, we are going to show that L p Λ k admits the same decomposition when 1 < p < ∞. Concretely, this means proving that the orthogonal projections on the various invariant subspaces and the intertwining operators that reduce ∆ k to scalar forms are L p -bounded.
The joint spectrum of L and i −1 T is the Heisenberg fan F ⊂ R 2 , defined as follows. The variable λ corresponds to i −1 T and ξ to L; that is, calling dE(λ, ξ) the joint spectral measure on F , if m is any bounded, continuous function on R × R * + , we can define the associated multiplier operator m(i −1 T, L) by integrating m against dE(λ, ξ), which is clearly bounded on L 2 (H n ).
It follows from Plancherel's formula that the spectral measure of the vertical half-line {(0, ξ) : ξ ≥ 0} ⊂ F is zero. A spectral multiplier is therefore a function m(λ, ξ) on F whose restriction to each ℓ k is measurable with respect to dλ for every k.
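For orientation, the Heisenberg fan admits the following well-known description (a sketch of the standard fact, in the notation of [MRS1, MRS2]; the labelling of the half-lines follows the usual convention):

```latex
% The Heisenberg fan: joint spectrum of (i^{-1}T, L) on H_n.
\[
  F \;=\; \Big(\bigcup_{j \in \mathbb{N}} \ell_j\Big) \cup \{(0,\xi) : \xi \ge 0\},
  \qquad
  \ell_j \;=\; \{(\lambda, \xi) : \xi = (2j+n)\,|\lambda|,\ \lambda \neq 0\},
\]
% and a bounded continuous multiplier m on R x R_+^* yields the operator
\[
  m(i^{-1}T, L) \;=\; \int_F m(\lambda, \xi)\, dE(\lambda, \xi),
\]
% which is bounded on L^2(H_n) by the spectral theorem.
```

The slopes 2j + n arise from the eigenvalues (2j + n)|λ| of L in the λ-th Bargmann representation.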
We shall use the following results from [MRS1,MRS2] concerning L p -boundedness of spectral multipliers, see also Section 5 in [MPR1].

9.1. Some classes of multipliers.
We introduce the classes Ψ ρ,σ τ of (possibly unbounded) smooth multipliers, in terms of which we will describe the behavior of the projections and intertwining operators presented in the previous sections.
(ii) If we are given a multiplier m, which satisfies the inequalities (9.4), but is only defined on an angle Γ leaving out a finite number of half-lines ℓ k,± of F , we can easily extend m to a multiplier in Ψ ρ,σ τ which vanishes identically on the missing lines. (iii) Property (iii) in Lemma 9.4 also applies to the situation where s > 0, (9.5) only holds on an angle omitting a finite number of half-lines in F , and m vanishes identically on these half-lines.
We denote by the same symbol Ψ ρ,σ τ the class of operators defined by the multipliers in this class. For notational convenience, we shall often use the same symbol to denote an operator M ∈ Ψ ρ,σ τ and (a convenient choice of) its multiplier M (λ, ξ).

Decomposition of L p Λ k and boundedness of the Riesz transforms
Since the letter p is already used to denote degrees of differential forms, the summability exponent will be denoted by r.
If V is any of the spaces V p,q 0 , V p,q 1,ℓ , Υ p,q 1,ℓ , etc., by r V we shall denote the closure of this space in L r Λ k . Our goal will be to prove the following theorem, whose parts (i) and (ii) extend Theorems 8.1 and 5.13.
(i) k For 0 ≤ k ≤ n, L r Λ k admits the direct sum decomposition By L r -boundedness of the * -Hodge operator, we can restrict ourselves to the case 0 ≤ k ≤ n.
The proof is based on the following lemma.
Lemma 10.2. Let U = ( U ij ) i,j=1,2 denote any of the operators U p,q 1,ℓ in (7.13) or U p,q 2,ℓ in (7.25). Then each component U ij of U is a multiplier operator in Ψ 0,0 0 , possibly composed with powers of e(dθ) and with the holomorphic and antiholomorphic Riesz transforms R and R̄.
In particular, for 1 < r < ∞, all these operators are L r -bounded on the spaces of differential forms of the appropriate (bi-)degrees.
This lemma will be proved in the last part of this section. Taking it for granted, we give the proof of the theorem.
Assume that (i) k−1 and (ii) k−1 hold, and consider any one of the orthogonal projections in the last column of Table 1. This is a product (or a sum of products) of factors, each of which is either R k−1 , its adjoint R * k−1 , or P = U U * , where U is one of the operators in Lemma 10.2. Then (i) k follows easily.
We now prove the implication (i) k =⇒ (ii) k . Factoring and using (ii) 0 , it suffices to prove the boundedness of ∆ Referring to the decomposition (10.1), we disregard the d-exact components of L r Λ k (i.e., those with R k−1 ), on which R k = 0, and adopt the simplified notation Denote by U β : r Z β −→ r V β the L r -closure of the unitary intertwining operator in Table 1, with r Z β denoting the L r -closure of the appropriate space X p,q , Y p,q or Z p,q in (5.16). Let where D β = U * β ∆ k U β is the scalar operator appearing in (6.1), (6.3), (6.18). Explicitly, Denote by m β the spectral multiplier of D β , which depends on the case. Combining Lemma 10.2, Lemma 9.4 (ii) and (v), and the fact that the multiplier ξ + λ 2 of ∆ 0 belongs to * Ψ 1,0 1 , we conclude that the composition ∆ has all its components in Ψ 0,0 0 . Therefore, Q.E.D.
Recall that if p = 0, then ∂ = ∂(I − C), and if q = 0, then ∂̄ = ∂̄(I − C̄), so that, putting again where R and R̄ are the holomorphic and antiholomorphic Riesz transforms of (4.5), which are known to be singular integral operators of Calderón-Zygmund type, and consequently are L r -bounded for 1 < r < ∞. Moreover, observe that ∂∂ − ∂∂ = 2∂∂ + T e(dθ). Since this term appears only when ℓ ≥ 1 and p + q + 2ℓ ≤ n, we have p + q ≤ n − 2, which easily implies that the operator + iT is injective on its domain in L 2 Λ p,q , so that we can factorize since, on the core, ∂ = ∂( + iT ), hence Observe also that Ξ p,q = (I − C p − C̄ q )(W p,q 0 ). Thus it will suffice to prove that the following scalar operators are in Ψ 0,0 0 : This will be a direct consequence of the following Lemmas 10.4, 10.5, 10.6, on the basis of Lemma 9.4. Q.E.D.
Lemma 10.5. For p + q + 2ℓ + 1 ≤ n, the following hold true: and, since ξ ∼ ξ, this shows that Γ ∈ * Ψ 1/2,0 0 . Then (a) follows easily. As for R 11 , recall that (10.7) By Lemma 9.4, and in view of what has already been shown, we find that which shows that the estimates from below for R 11 also hold true, so that R . This concludes the proof of (b). Q.E.D.
Lemma 10.6. For p + q + 2ℓ + 1 ≤ n, the following hold true: By (7.11), on W p,q 0 we have the identity Applying Lemma 10.5 (b), we obtain that R 22 ∈ Ψ 1,1 2 . To prove the last part of the statement, observe that the presence of the factor I − C p − C̄ q allows us, on the basis of Remark 9.5 (ii), to restrict, if necessary, our analysis to an angle omitting one of the external half-lines of F , where the multipliers of and are non-zero, and their reciprocals satisfy (9.5) with ρ = 0 and σ = τ = −1. Each of the remaining factors in (10.8) is in * Ψ 1,0 0 , and this, together with Lemma 10.5 (b), gives the conclusion. Q.E.D.
We next turn to the intertwining operator U 2,ℓ . Our goal will be to prove Proposition 10.7. Assume that p + q + 2 + 2ℓ ≤ n and 1 < r < ∞. Then there is a constant C r so that In view of the explicit expression for U 2,ℓ in Proposition 7.12, it will suffice to prove that the operators H 11 √ ∆ ′ ∆ ′′ and H 21 √ ∆ ′ ∆ ′′ are L r -bounded on X p,q , and the operators H 12 √ ∆ ′ ∆ ′′ and H 22 √ ∆ ′ ∆ ′′ on Y p,q (notice that the multiplier σ(T ) corresponds essentially to the Hilbert transform along the center of the Heisenberg group, which is L r -bounded).
We shall prove the estimates on X p,q only, since the estimates on Y p,q follow along the same lines.
Using again the factorizations (10.3), (10.4) by means of Riesz transforms, we see that we are reduced to estimating, with respect to the L r -norm, the following scalar operators on X p,q : variable of the Heisenberg group with the singular integral operator C̄, which shows that they are L r -bounded for 1 < r < ∞ as well.
This completes the proof of Proposition 10.7.
The proof follows the same lines as in [MPR1].
As a corollary to Theorem 10.1 and its proof, we can derive the following extension of Lemma 4.2 in [MPR1].
We have seen that the operator ∆ . As in the proof of Lemma 4.2 in [MPR1], we can thus conclude that v ∈ L r Λ k−1 . If ξ ∈ SΛ k , then and, by the same lemma, R k ω = ∆ −1/2 k+1 dω, where dω = d 2 u = 0 in the sense of distributions. This implies that ω = dv, and thus also that ω ∈ R k−1 (L 2 Λ k−1 ). Q.E.D.
Corollary 11.3. If ω ∈ L 2 Λ k , then ω ∈ R k−1 (L 2 Λ k−1 ) if and only if there is some u ∈ D ′ Λ k−1 such that ω = du in the sense of distributions.
Proof. One implication is immediate by Lemma 11.2. To prove the converse implication, let us assume that ω ∈ R k−1 (L 2 Λ k−1 ). Then, according to Lemma 4.4 and Proposition 4.5, ω = R k−1 R * k−1 ω. Moreover, if we define v as in the proof of Lemma 11.2, then v ∈ L r Λ k−1 and dv = R k−1 R * k−1 ω, hence dv = ω. We may thus choose u = v. Q.E.D.
Let us denote by Λ = ⊕ 2n+1 k=0 Λ k the Grassmann algebra of h * n , and by L p Λ = L p (H n )Λ = ⊕ 2n+1 k=0 L p Λ k , SΛ, etc., the spaces of L p -sections, S-sections, etc., of the corresponding bundle over H n .
The Dirac operator acting on SΛ is given by D = d + d * . Notice that D 2 = ∆ on dom (∆); in particular, the Dirac operator D and the Hodge Laplacian ∆ commute as differential operators.
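With D = d + d * (the standard Hodge-Dirac operator, which is the choice consistent with the identity D 2 = ∆ stated above), the computation is one line, since d 2 = 0 and hence (d * ) 2 = 0 on the core:

```latex
D^2=(d+d^*)^2=d^2+dd^*+d^*d+(d^*)^2=dd^*+d^*d=\Delta .
```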
However, in order to reduce the spectral theory of D to that of ∆, we need to show that D and ∆ strongly commute, in the sense that all the spectral projections in the spectral decompositions of D and ∆ commute.
Proposition 11.4. We have that D 2 = ∆. In particular, D and ∆ strongly commute.
Proof. Recall from the previous section that the Riesz transform R = d∆ −1/2 and its adjoint R * = ∆ −1/2 d * = d * ∆ −1/2 are L p -bounded for 1 < p < ∞. Let us put One easily verifies that P 2 ± = P ± and P * ± = P ± , so that P + and P − are orthogonal projections, which are in fact L p -bounded for 1 < p < ∞. Moreover, Indeed, notice that, since the operators R, R * are bounded and commute with ∆ on the core, they also commute with the spectral projections Ẽ(B), i.e., (11.4) Ẽ(B)P ± = P ± Ẽ(B).
Moreover, we clearly have F (A) = F (A + ) + F (A − ) , and P + P − = P − P + = 0. This implies that F (A) is an orthogonal projection, and that F is a spectral measure on R.
We set Thus, it remains to show that D̃ ⊂ D 0 .
Let ϕ be a smooth cut-off function, even, identically 1 on K and with support contained in K ′ = I ′ ∪ (−I ′ ), where I ′ = [a ′ , b ′ ] with 0 < a ′ < a < b < b ′ . Then, where ψ is a smooth function with compact support in K ′ 2 = {s 2 : s ∈ K ′ }.
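The passage from ϕ to ψ is the usual even-function substitution; the display below is a sketch of that step. Since ϕ is even and smooth, and supp ϕ ⊂ K ′ stays away from the origin, the function ψ(u) = ϕ(√u) (for u > 0, extended by 0) is smooth with compact support:

```latex
\psi(u)=\varphi(\sqrt{u\,})\quad(u>0),\qquad
\varphi(s)=\psi(s^2),\qquad
\operatorname{supp}\psi\subset K'^{\,2}=\{s^2:\ s\in K'\}.
```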

Appendix
In this final section we collect some technical facts and proofs that we have previously set aside.
We need some preliminary computations.
This proves the lemma. Q.E.D.
Lemma 12.3. The following properties hold true.
(i) For all p, q, W p,q 0 = X p,q and W p,q 0 = Y p,q . (ii) In particular, maps X p,q into itself, and maps Y p,q into itself. We therefore set (12.4) X = | X p,q : X p,q → X p,q , Y = | Y p,q : Y p,q → Y p,q .
Then −1 X and −1 Y are well defined on X p,q and Y p,q , respectively.
Proof. (ii) is immediate from (i). To prove (i), we verify the statements concerning the spaces X p,q , the discussion of the spaces Y p,q being similar. Observe first that leaves W p,q 0 invariant because of (1.20).
Completion of the proof of Lemma 7.6. Since Q maps the subspace (W p,q 0 ) 2 into itself, we have that We compute first R 11 . Using the identity − = 2imT and (6.9), we have −T 2 R 11 = (Q * N Q) 11 In the same way one finds that Finally, using the formulas in (6.9) we see that −T 2 R 12 = −T 2 R 21 = (Q * N Q) 12 Q.E.D.
Proof of Lemma 7.10 (i). In order to compute A * 2,ℓ A 2,ℓ we apply again Lemma 7.4. We first define B 2,ℓ as the term on the right hand side of (7.19) acting as an unbounded operator from L 2 Λ p,q 2 to L 2 Λ p,q 2 , with core S 0 Λ p,q 2 , and then we compute B := B * 2,ℓ B 2,ℓ | (W p,q 0 ) 2 . If we then show that B maps Z p,q into itself, the equality A * 2,ℓ A 2,ℓ = B | Z p,q , which in turn equals −c s+1,ℓ T −2 E, will follow.

From this it follows that
where we have used the equality (1.19). Thus the statement for E 11 follows. The term E 22 is its complex conjugate, and thus follows as well.
It is now easy to check that B maps Z p,q into itself. Indeed, suppose that ξ ∈ X p,q and η ∈ Y p,q , and put σ = B 11 ξ + B 12 η. Observe that both B 11 and B 12 factor as B 1j = D 1j , j = 1, 2, where D 1j leaves W p,q 0 invariant. Therefore Lemma 12.3 shows that σ ∈ X p,q . In a similar way, one shows that B 21 ξ + B 22 η ∈ Y p,q , which concludes the proof. Q.E.D.
To compute the square root of a matrix, we shall make use of the following formula, which is an application of the Cayley-Hamilton theorem: for a positive definite 2 × 2 matrix A we have A 1/2 = (tr A + 2 √ det A) −1/2 (A + √ det A I). Proof of Lemma 7.10 (ii). We have The result for (A * 2,ℓ A 2,ℓ ) −1/2 now follows easily, since where M 11 and M 22 are as claimed in (7.21). Q.E.D.
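The Cayley-Hamilton square-root formula can also be checked numerically; the helper `sqrtm_2x2` below is purely illustrative and not from the paper. Since A 2 = (tr A)A − (det A)I, squaring (A + √ det A I)/ √ (tr A + 2 √ det A) gives back A:

```python
import math

def sqrtm_2x2(a, b, c, d):
    """Principal square root of a positive definite 2x2 matrix [[a, b], [c, d]]:
    A^{1/2} = (A + sqrt(det A) I) / sqrt(tr A + 2 sqrt(det A)).
    Correctness follows from Cayley-Hamilton: A^2 = (tr A) A - (det A) I."""
    s = math.sqrt(a * d - b * c)      # sqrt(det A)
    t = math.sqrt(a + d + 2 * s)      # sqrt(tr A + 2 sqrt(det A))
    return [[(a + s) / t, b / t], [c / t, (d + s) / t]]

# Check on A = [[2, 1], [1, 2]]: squaring the result recovers A.
B = sqrtm_2x2(2, 1, 1, 2)
sq = [[B[0][0] * B[0][0] + B[0][1] * B[1][0],
       B[0][0] * B[0][1] + B[0][1] * B[1][1]],
      [B[1][0] * B[0][0] + B[1][1] * B[1][0],
       B[1][0] * B[0][1] + B[1][1] * B[1][1]]]
```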