
Unbounded operators can transform arbitrarily small vectors into arbitrarily large vectors, a phenomenon known as instability. Stabilization methods approximate the value of an unbounded operator by applying a family of bounded operators to rough approximate data that need not lie in the domain of the unbounded operator. In this paper we are concerned with a stable method of computing values of unbounded operators under perturbations of both the operator and the data, and the stability of this method is established.

The stable computation of values of unbounded operators is one of the most important problems in computational mathematics. Indeed, let A be a linear operator from X into Y with domain D(A) ⊂ X and range R(A) ⊂ Y, where X and Y are normed spaces and A is unbounded, that is, there exists a sequence of elements x_n ∈ D(A), n = 1, 2, ⋯, such that ‖Ax_n‖ → +∞ as n → ∞; without loss of generality we may take ‖x_n‖ = 1. Let x_0 ∈ D(A) and y_0 = Ax_0. Put x_{n,δ} = x_0 + δx_n, where δ is an arbitrarily small positive number, and let y_{n,δ} = Ax_{n,δ}. Then

‖y_{n,δ} − y_0‖ = δ‖Ax_n‖ → +∞, ∀δ > 0,

while ‖x_{n,δ} − x_0‖ = δ may be arbitrarily small.
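As a concrete numerical illustration of this blow-up (my own toy computation, not part of the paper), take X = Y = L²(0, 2π), A = d/dt, x_0 = 0 and x_n(t) = sin(nt)/√π, so that ‖x_n‖ = 1 while ‖Ax_n‖ = n:

```python
import numpy as np

# Toy illustration of the instability: A x = x' on L2(0, 2*pi).
# x_n(t) = sin(n*t)/sqrt(pi) has ||x_n|| = 1, while ||A x_n|| = n.
t, dt = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False, retstep=True)
l2_norm = lambda f: np.sqrt(np.sum(f * f) * dt)   # L2 norm via Riemann sum

delta = 1e-3
growth = {}
for n in (10, 100, 1000):
    x_n = np.sin(n * t) / np.sqrt(np.pi)           # ||x_n|| ~ 1
    ax_n = n * np.cos(n * t) / np.sqrt(np.pi)      # A x_n, ||A x_n|| ~ n
    # perturbing x_0 = 0 by delta*x_n: data error delta, value error delta*n
    growth[n] = (delta * l2_norm(x_n), delta * l2_norm(ax_n))
    print(n, growth[n])
```

The data error stays at δ = 10⁻³ for every n, while the error in the computed value grows proportionally to n.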

Therefore, in the considered case the problem of computing values of the operator is unstable [

In the case where A is a closed densely defined unbounded linear operator from a Hilbert space X into a Hilbert space Y, V. A. Morozov studied a stable method for approximating the value Ax_0 when only approximate data x_δ, with ‖x_δ − x_0‖ ≤ δ, are available [

The approximation is y_α^δ = Az_α^δ, where z_α^δ minimizes the functional

Φ_α^δ(z) = ‖z − x_δ‖² + α‖Az‖², z ∈ D(A), α > 0. (1)

He showed that if α = α(δ) → 0 as δ → 0, in such a way that δ²/α → 0, then y_α^δ → Ax_0 as δ → 0. Moreover, rate-of-convergence results for {y_α^δ} have been established [

In another case, where A is a monotone operator from a real strictly convex reflexive Banach space X into its dual X*, an approximation to y_0 = Ax_0 is the element y_α^δ = −U(x_α^δ − x_δ)/α, where x_α^δ is the unique solution of the equation

αAx + U(x − x_δ) = 0,

where U: X → X* is the duality mapping of X [

We now assume that both the operator A and the element x_0 ∈ D(A) are given only approximately, by A_h and x_δ ∈ X, which satisfy

‖x_δ − x_0‖ ≤ δ and ‖A_hx − Ax‖ ≤ h, ∀x ∈ D = D(A_h) ∩ D(A), h, δ > 0, (2)

where A_h is also an operator from X into Y. The task is to approximate values of A when only the approximations A_h and x_δ are given. Until now, this has remained an open problem.

In this paper we are concerned with the construction of a stable method of computing values of the operator A under the perturbations (2).

In this section, we assume that A: D(A) ⊂ X → Y is a closed densely defined unbounded linear operator from a Hilbert space X into a Hilbert space Y, and that x_0 ∈ D(A). The pair (A, x_0) is called the exact data.

Instead of the exact data (A, x_0), we have an approximation (A_h, x_δ), which satisfies (1.2), where A_h is also a closed densely defined unbounded linear operator from X into Y with domain D(A_h) = D(A), ∀h > 0.

First, we define the regularization functional

Φ_Δ(z) = ‖z − x_δ‖² + α‖A_hz‖², ∀z ∈ D(A_h), (1)

where α > 0 is called the regularization parameter and Δ = (h, δ, α).

We shall take as an approximation to y_0 = Ax_0 the element y_Δ = A_hz_Δ, where z_Δ minimizes the regularization functional Φ_Δ(z) over D(A_h).

Theorem 2.1. [

For every α > 0, the functional Φ_Δ attains its minimum at a unique element z_Δ ∈ D(A_h), given by

z_Δ = (I + αA_h*A_h)^{−1}x_δ. (2)

Hence

y_Δ = A_h(I + αA_h*A_h)^{−1}x_δ. (3)

To establish the convergence of (3), it will be convenient to reformulate (3) as

y_Δ = A_hĂ_h[αI + (1 − α)Ă_h]^{−1}x_δ, (4)

where Ă_h = (I + A_h*A_h)^{−1}.

Ă_h and A_hĂ_h are known to be bounded, everywhere defined linear operators, and Ă_h is self-adjoint with spectrum σ(Ă_h) ⊂ [0, 1] ([

To further simplify the presentation, we introduce the function

T_α(t) = [α + (1 − α)t]^{−1}, α > 0, t ∈ [0, 1].

We then have

y_Δ = A_hĂ_hT_α(Ă_h)x_δ. (5)
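In finite dimensions the equivalence of (3) and (4) is easy to verify numerically; the following sanity check (mine, not from the paper) writes B for the operator denoted Ă_h:

```python
import numpy as np

# Check A_h (I + alpha A_h^T A_h)^{-1} == A_h B [alpha I + (1 - alpha) B]^{-1}
# with B = (I + A_h^T A_h)^{-1}, on a random matrix A_h.
rng = np.random.default_rng(1)
A_h = rng.standard_normal((5, 4))
alpha = 0.37
I = np.eye(4)
B = np.linalg.inv(I + A_h.T @ A_h)
lhs = A_h @ np.linalg.inv(I + alpha * A_h.T @ A_h)            # formula (3)
rhs = A_h @ B @ np.linalg.inv(alpha * I + (1.0 - alpha) * B)  # formula (4)
print(np.max(np.abs(lhs - rhs)))  # agreement up to rounding error
```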

We also denote

y_{h,α} = A_hĂ_hT_α(Ă_h)x_0. (6)

The following lemma will be used in the proof of Theorem 2.2.

Lemma 2.1. Under the stated assumptions, we have

A_hĂ_h = Â_hA_h,

where Â_h = (I + A_hA_h*)^{−1}.

Proof. We denote

G(A_h) = {(x, A_hx): x ∈ D(A_h)},

VG(A_h*) = {(−A_h*y, y): y ∈ D(A_h*)}.

Since A_h is a closed densely defined linear operator, G(A_h) and VG(A_h*) are complementary orthogonal subspaces of the Hilbert space X × Y ([

Hence every z ∈ X admits a decomposition

(z, 0) = (x, A_hx) + (−A_h*y, y), with x ∈ D(A_h), y ∈ D(A_h*). (7)

Thus

z = x − A_h*y, 0 = A_hx + y. (8)

Therefore, x ∈ D(A_h*A_h) and x + A_h*A_hx = z. Because of the uniqueness of the decomposition (7), x is uniquely determined by z, and so the everywhere defined inverse (I + A_h*A_h)^{−1} exists.

In a similar way, the everywhere defined inverse (I + A_hA_h*)^{−1} exists. It follows from (8) that

A_h(I + A_h*A_h)^{−1} = (I + A_hA_h*)^{−1}A_h,

that is, A_hĂ_h = Â_hA_h. Moreover, Ă_h and Â_h are bounded operators with

‖Ă_h‖ ≤ 1, ‖Â_h‖ ≤ 1

([
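A quick finite-dimensional sanity check of the lemma (my own, not from the paper), writing B and C for Ă_h and Â_h:

```python
import numpy as np

# Check A_h (I + A_h^T A_h)^{-1} == (I + A_h A_h^T)^{-1} A_h and the bounds
# ||(I + A_h^T A_h)^{-1}|| <= 1, ||(I + A_h A_h^T)^{-1}|| <= 1.
rng = np.random.default_rng(2)
A_h = rng.standard_normal((6, 4))
B = np.linalg.inv(np.eye(4) + A_h.T @ A_h)   # the operator written A-breve
C = np.linalg.inv(np.eye(6) + A_h @ A_h.T)   # the operator written A-hat
identity_gap = np.max(np.abs(A_h @ B - C @ A_h))
print(identity_gap, np.linalg.norm(B, 2), np.linalg.norm(C, 2))
```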

Theorem 2.2. If D(A_hA_h*A_h) = D(AA*A), ∀h > 0, x_0 ∈ D(AA*A), and α = α(h, δ) → 0, δ²/α → 0 as h, δ → 0, then {y_Δ} converges to Ax_0.

Proof. Let ω = (I + A_hA_h*)A_hx_0. Then A_hx_0 = Â_hω, and since A_hĂ_h = Â_hA_h (Lemma 2.1), we have

y_{h,α} − A_hx_0 = A_h(Ă_h − [αI + (1 − α)Ă_h])[αI + (1 − α)Ă_h]^{−1}x_0 = αA_h(Ă_h − I)T_α(Ă_h)x_0 = α(Â_h − I)T_α(Â_h)A_hx_0 = α(Â_h − I)T_α(Â_h)Â_hω.

Since ‖T_α(Â_h)Â_h‖ ≤ 1 (because tT_α(t) = t/[α + (1 − α)t] ≤ 1 on [0, 1]) and ‖Â_h − I‖ ≤ 2 for all h > 0, we obtain ‖y_{h,α} − A_hx_0‖ ≤ 2α‖ω‖, and hence

lim_{α→0} y_{h,α} = A_hx_0, ∀h > 0.

On the other hand we have

‖y_Δ − y_{h,α}‖² = 〈A_hĂ_hT_α(Ă_h)(x_δ − x_0), A_hĂ_hT_α(Ă_h)(x_δ − x_0)〉 = 〈A_h*A_hĂ_hT_α(Ă_h)(x_δ − x_0), Ă_hT_α(Ă_h)(x_δ − x_0)〉 = 〈(I − Ă_h)T_α(Ă_h)(x_δ − x_0), Ă_hT_α(Ă_h)(x_δ − x_0)〉 ≤ δ²/α,

since A_h*A_hĂ_h = I − Ă_h, ‖Ă_hT_α(Ă_h)‖ ≤ 1 and ‖(I − Ă_h)T_α(Ă_h)‖ ≤ 1/α on σ(Ă_h) ⊂ [0, 1].

Hence

‖y_Δ − y_{h,α}‖ → 0 as α(h, δ) → 0, δ²/α → 0.

We have

‖y_Δ − Ax_0‖ ≤ ‖y_Δ − y_{h,α}‖ + ‖y_{h,α} − A_hx_0‖ + ‖A_hx_0 − Ax_0‖ ≤ ‖y_Δ − y_{h,α}‖ + ‖y_{h,α} − A_hx_0‖ + h. (9)

It follows from (9) that

y_Δ → Ax_0 as h, δ → 0.

The theorem is proved.

We shall call y_Δ the approximate value of the operator A at x_0.
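The convergence behaviour of Theorem 2.2 can be observed on a diagonal toy model (my own construction, not from the paper): (Ax)_k = kx_k on a truncated ℓ²-space, with A_h = A (so h = 0) and the parameter choice α = δ, for which δ²/α = δ → 0:

```python
import numpy as np

# Diagonal model: (A x)_k = k x_k; x0_k = k^{-4} lies in D(A A* A).
# y_Delta has coordinates k * x_delta_k / (1 + alpha k^2), and its error
# against A x0 decreases as delta -> 0 with alpha = delta.
n = 2000
k = np.arange(1, n + 1, dtype=float)
x0 = k**-4.0
rng = np.random.default_rng(5)
e = rng.standard_normal(n)
e /= np.linalg.norm(e)                       # unit-norm noise direction

errs = []
for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    alpha = delta                            # delta**2 / alpha = delta -> 0
    x_delta = x0 + delta * e
    y = k * x_delta / (1.0 + alpha * k**2)   # y_Delta of (3) for diagonal A
    errs.append(np.linalg.norm(y - k * x0))
print(errs)
```

Each reduction of δ yields a smaller error ‖y_Δ − Ax_0‖, as the theorem predicts.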

Let X be a real strictly convex reflexive Banach space whose dual X* is an E-space. Suppose that A: X → X* is a hemi-continuous monotone operator (possibly multi-valued) with domain D(A) ⊂ X, and let y be a given element of X*. We consider the following three problems:

1) To solve the equation

Ax = y, (1)

2) To solve the variational inequality

〈Ax − y, x − z〉 ≥ 0, ∀x ∈ D(A), (2)

3) To compute values of the operator A at x_0 ∈ X, with x_0 given approximately.

These problems are important objects of investigation in the theory of unstable problems. In [

As is known [

〈Ax − y, x − x̄〉 ≥ 0, ∀x ∈ D(A), (3)

where 〈Ax − y, x − x̄〉 denotes the value of the linear functional Ax − y at x − x̄.

We shall call x̄ a generalized solution of Equation (1). We note that if A is hemi-continuous and D(A) is open or everywhere dense in X, or if A is maximal monotone, then a generalized solution x̄ coincides with the corresponding solution x̃, and (3) is equivalent to the inclusion y ∈ Ax̄ [

We now deal with the stable computation of values of the operator A at x_0 when only the approximations A_h, x_δ as in (2) are given, where A_h is also a hemi-continuous monotone operator from X into X* with domain D(A_h) = D(A) = X.

We denote by R_{x_0} the set of values of A at x_0:

R_{x_0} = {y ∈ X*: y ∈ Ax_0}.

In X ∗ we consider the set

M_{x_0} = {y ∈ X* | 〈Ax − y, x − x_0〉 ≥ 0, ∀x ∈ X},

and we call M_{x_0} the set of generalized values of A at x_0. It is easy to show that R_{x_0} ⊂ M_{x_0}.

Lemma 3.1. [

There exists a unique element y_0 ∈ M_{x_0} of minimal norm:

‖y_0‖ = min_{y ∈ M_{x_0}} ‖y‖.

Under the above hypotheses, there exist duality mappings

U: X → X*, V: X* → X,

which are strictly monotone, single-valued, homogeneous and hemi-continuous, and satisfy

VUx = x, ∀x ∈ X; UVy = y, ∀y ∈ X*,

(see [

We consider the equation

αA_hx + U(x − x_δ) = 0, α > 0. (4)

The following theorem asserts the existence and uniqueness of the generalized solution of (4).

Theorem 3.1. Under the above hypotheses, Equation (4) has a unique generalized solution x_Δ for any Δ = (h, δ, α).

Proof. Let Ã_h be a maximal monotone extension of A_h (such an extension exists by virtue of Zorn's lemma). Then the operator x ↦ αÃ_hx + U(x − x_δ) is maximal monotone [

Hence there exists a unique element x̃_Δ ∈ X such that

〈αA_hx + U(x − x_δ), x − x̃_Δ〉 ≥ 0, ∀x ∈ X.

Thus x̃_Δ coincides with the generalized solution of Equation (4). Therefore, (4) has a unique generalized solution x_Δ = x̃_Δ for any Δ = (h, δ, α). We now consider the element

y_Δ = −U(x_Δ − x_δ)/α. (5)

The uniqueness of x_Δ implies that y_Δ is uniquely determined. It is easy to show that y_Δ ∈ Ã_hx_Δ.

We call y_Δ the approximate value of A at x_0 for the given approximation (A_h, x_δ).

Theorem 3.2. Under the stated assumptions, if α(h, δ) → 0 and δ/α → 0 as h, δ → 0, then the sequence {y_Δ} converges to the generalized value y_0 ∈ M_{x_0} of the operator A at x_0.

Proof. By applying the duality mapping V: X* → X to (5), we obtain

αVy_Δ + (x_Δ − x_δ) = 0. (6)

Let M_{x_0}^h denote the set of generalized values of A_h at x_0, i.e.

M_{x_0}^h = {y_h ∈ X* | 〈A_hx − y_h, x − x_0〉 ≥ 0, ∀x ∈ X}.

By using [

〈y_Δ − y_h, x_Δ − x_0〉 + 〈y_Δ − y_h, x_0 − x_δ〉 + α〈y_Δ − y_h, Vy_Δ〉 = 0, ∀y_h ∈ M_{x_0}^h. (7)

It is easy to show that (x_0, y_h) ∈ gr Ã_h, and hence

〈y_Δ − y_h, x_Δ − x_0〉 ≥ 0. (8)

It follows from (7) and (8) that

〈y_Δ − y_h, x_0 − x_δ〉 + α〈y_Δ − y_h, Vy_Δ〉 ≤ 0,

which implies

α‖Vy_Δ‖² − α‖y_h‖‖Vy_Δ‖ − ‖y_Δ − y_h‖‖x_0 − x_δ‖ ≤ 0,

and consequently (since ‖Vy_Δ‖ = ‖y_Δ‖, ‖y_Δ − y_h‖ ≤ ‖y_Δ‖ + ‖y_h‖ and ‖x_0 − x_δ‖ ≤ δ)

α‖y_Δ‖² − (α‖y_h‖ + δ)‖y_Δ‖ − δ‖y_h‖ ≤ 0, ∀y_h ∈ M_{x_0}^h. (9)

It follows from (9) that

‖y_Δ‖ ≤ ‖y_h‖ + 2δ/α, ∀y_h ∈ M_{x_0}^h.
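This step is a quadratic-root estimate: by (9), ‖y_Δ‖ is at most the positive root of αt² − (α‖y_h‖ + δ)t − δ‖y_h‖, and that root never exceeds ‖y_h‖ + 2δ/α. A quick random check of the root estimate (mine, not from the paper):

```python
import math
import random

# Check: the positive root of alpha*t^2 - (alpha*b + delta)*t - delta*b
# (b standing for ||y_h||) is at most b + 2*delta/alpha.
random.seed(0)
for _ in range(10000):
    alpha = random.uniform(1e-6, 1.0)
    delta = random.uniform(0.0, 1.0)
    b = random.uniform(0.0, 10.0)
    p = alpha * b + delta
    root = (p + math.sqrt(p * p + 4.0 * alpha * delta * b)) / (2.0 * alpha)
    bound = b + 2.0 * delta / alpha
    assert root <= bound + 1e-9 * (1.0 + bound)
print("root estimate verified on 10000 random cases")
```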

In view of the preceding remark and (2), we obtain

‖y_h − y‖ ≤ h, ∀y ∈ M_{x_0}, ∀y_h ∈ M_{x_0}^h.

Hence

‖y_Δ‖ ≤ ‖y‖ + 2δ/α + h, ∀y ∈ M_{x_0},

which implies

‖y_Δ‖ ≤ ‖y_0‖ + 2δ/α + h, ∀h, δ > 0. (10)

Since X* is an E-space, it follows from (10) and [

The theorem is proved.

As a simple concrete example of this type of approximation, consider differentiation in L²(ℝ). That is, the operator A is defined on H¹(ℝ), the Sobolev space of functions possessing a weak derivative in L²(ℝ), by

Ax = dx/dt.

The approximate data consist of a function x_δ ∈ L²(ℝ) and an operator A_h defined on H¹(ℝ) by A_hx = dx/dt, satisfying

‖x_δ − x_0‖ ≤ δ, ‖A_hx − Ax‖ ≤ h, ∀x ∈ H¹(ℝ). (1)

The stabilized approximate derivative (3) is easily seen (using Fourier transform analysis) to be given by

y_Δ(s) = ∫_{−∞}^{+∞} σ_{α,h}(s − t)x_δ(t)dt, (2)

where the kernel σ α , h is given by

σ_{α,h}(t) = −(sign(t)/(2α))exp(−|t|/√α). (3)

Then y_Δ(s) in (2) is the approximate value of the operator A at x_0 for this method.

The author declares no conflicts of interest regarding the publication of this paper.

Van Kinh, N. (2020) On the Stable Method Computing Values of Unbounded Operators. Open Journal of Optimization, 9, 129-137. https://doi.org/10.4236/ojop.2020.94009