\documentclass[11pt]{report}
\setlength{\oddsidemargin}{-.45in}
\setlength{\evensidemargin}{-.5in}
\setlength{\textwidth}{7.1in}
\setlength{\topmargin}{0 in}
\setlength{\textheight}{9.6in}
\renewcommand{\baselinestretch}{1.3}
\usepackage{graphics}
\usepackage{latexsym}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{pstricks}
\usepackage{multicol}
\pagestyle{plain}
\pagenumbering{arabic}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[chapter]
\newtheorem{ex}[thm]{Exercise}
\newtheorem*{lemma}{Lemma}
\newtheorem*{R4.3}{Rudin 4.3}
\newtheorem*{R4.4}{Rudin 4.4}
\newtheorem*{R4.8}{Rudin 4.8}
\newtheorem*{R4.10}{Rudin 4.10}
\newtheorem*{R4.11}{Rudin 4.11}
\newtheorem*{R4.14}{Rudin 4.14}
\newtheorem*{R4.18}{Rudin 4.18}
\newtheorem*{R4.20}{Rudin 4.20}
\newtheorem*{R4.22}{Rudin 4.22}
\newtheorem*{R4.23}{Rudin 4.23}
\newtheorem*{R7.1}{Rudin 7.1}
\newtheorem*{R7.2}{Rudin 7.2}
\newtheorem*{R7.3}{Rudin 7.3}
\newtheorem*{R7.9}{Rudin 7.9}
\newtheorem*{R7.13}{Rudin 7.13}
\newtheorem*{R7.16}{Rudin 7.16}
\newtheorem*{R7.18}{Rudin 7.18}
%MATH CHARACTER COMMANDS
\newcommand{\be}{\begin{enumerate}} %Use with \item to create numbered and lettered lists; can nest
\newcommand{\ee}{\end{enumerate}}
\newcommand{\N}{\mathbb{J}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\ra}{\rightarrow}
\newcommand{\ds}{\displaystyle}
\newcommand{\ep}{\epsilon}
\newcommand{\litlet}{\renewcommand{\labelenumi}{(\alph{enumi})}}
%SPACING COMMANDS
\newcommand{\minispace}{\vspace{.25in}}
\newcommand{\response}{\vspace{.75in}}
\newcommand{\midresponse}{\vspace{1.5in}}
\newcommand{\longresponse}{\vspace{2.5in}}
\newcommand{\newpar}{\vspace{.2in} \noindent}
\begin{document}
\title{\huge Summer Math Institute Analysis}
\author{\huge Homework Solutions}
\date{ Summer 2007}
\maketitle
\tableofcontents
\chapter{Metric Spaces}
\section{Countable and Uncountable Sets}
\begin{ex}
\end{ex}
First note as a corollary to Theorem 1.2 we have the following statement (Corollary to Theorem 2.12 in Rudin):
Suppose $A$ is at most countable, and, for every $\alpha \in A$, $B_{\alpha}$ is at most countable. Put
$$T = \bigcup_{\alpha \in A} B_{\alpha}.$$
Then $T$ is at most countable.
Now assume that $S \backslash K$ were countable. Then $S = K \cup (S \backslash K)$ would be the union of two countable sets and hence, by the above corollary, at most countable, contradicting the fact that $S$ is uncountable. Therefore $S \backslash K$ is uncountable.
Note that by identical argument, the same result holds with the slightly weaker assumption that $K$ is at most countable.
\begin{ex}
\end{ex}
The result will follow from exercise 1.1 if we can show that $S$ is uncountable and $K$ is (at most) countable.
To show that $S$ is uncountable we use Cantor's diagonal process, mimicking the argument of section 1.1.3 in the notes. Assume that $S$ were countable. Then we can arrange $S$ in a sequence $\{a_i\}_{i=1}^{\infty}$ of distinct elements. We then construct an element $a \in S$ such that $a \neq a_i$ for $i=1,2, \ldots$ by choosing the $n$th digit of $a$ such that it is not equal to the $n$th digit of $a_n$ ($n = 1,2, \ldots$). Hence $a$ and $a_n$ differ in at least one digit, so $a \neq a_n$ ($n=1,2, \ldots$). This contradicts the fact that $S = \{a_i\}_{i=1}^{\infty}$, therefore $S$ must be uncountable.
To show that $K$ is countable we partition $K$ into a countable number of finite sets. Define $K_n = \{r \in K : r=.s_1 s_2 s_3 \ldots, \mbox{ such that } s_n \neq 0 \mbox{ and } s_i = 0 \mbox{ for } i > n \} $. Let $K_0 := \{ .0000 \ldots \}$. Then $K = \cup_{n=0}^{\infty} K_n$. For each positive integer $n$, $K_n$ has $9 \cdot 10^{n-1}$ elements (each of the first $n-1$ digits can take any value from 0 to 9, the $n$th digit must be from 1 to 9, and all the later digits must be zero). $K_0$ has exactly one element. Therefore each of the sets $K_n$ has a finite number of elements. Therefore, by the corollary to Theorem 1.2 noted in the previous exercise, $K$ is at most countable. ($K$ is in fact countably infinite: the sets $K_n$ are disjoint and nonempty, so $K$ is infinite.)
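As an editorial aside, the count $9 \cdot 10^{n-1}$ for the size of $K_n$ can be confirmed by brute force for small $n$; the sketch below is an illustrative check, not part of the solution (the helper name is ours).

```python
# Count the terminating decimals .s1 s2 ... sn whose last nonzero digit is
# in position n, i.e. the elements of K_n, by enumerating all digit strings
# of length n and keeping those whose final digit is nonzero.
from itertools import product

def size_of_Kn(n):
    """Number of elements of K_n, counted by direct enumeration."""
    count = 0
    for digits in product(range(10), repeat=n):
        if digits[-1] != 0:  # s_n must be nonzero
            count += 1
    return count

for n in range(1, 5):
    assert size_of_Kn(n) == 9 * 10 ** (n - 1)
```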
Alternatively, we can show that $K$ is countable by showing that it is an (infinite) subset of the rational numbers, which we have already shown are countable. Let $r=.s_1 s_2 s_3 \ldots s_n 0 0 0 \ldots $ be any element of $K$. Then $r = m/p$ where $m= s_1 s_2 s_3 \ldots s_n$ and $p=10^{n}$, so $r \in \Q$.
We can avoid the need to show that $K$ is countable by making a slightly more careful diagonal process argument to show that $S \backslash K$ is uncountable. Assume that $S \backslash K$ were countable. Then we can arrange $S \backslash K$ in a sequence $\{a_i\}_{i=1}^{\infty}$ of distinct elements. We then construct an element $a \in S \backslash K$ such that $a \neq a_i$ for $i=1,2, \ldots$ by choosing the $n$th digit of $a$ such that it is equal to neither the $n$th digit of $a_n$ ($n = 1,2, \ldots$) nor zero. Hence $a$ and $a_n$ differ in at least one digit, so $a \neq a_n$ ($n=1,2, \ldots$). This contradicts the fact that $S \backslash K = \{a_i\}_{i=1}^{\infty}$, therefore $S \backslash K$ must be uncountable.
\section{Metric Spaces: Definition and Examples}
\begin{ex}
\end{ex}
Let $(X,d_X)$ be the given metric space and let $d : X \times X \ra \R$ be the bounded metric on $X$ induced by $d_X$. Let $x,y,z \in X$.
\begin{itemize}
\item Since $d_X$ is a metric, $d_X$ is a non-negative function, so $d$ is a non-negative function.
\item $d(x,y) = 0 \iff \min \{d_X (x,y) , 1 \} = 0 \iff d_X (x,y) = 0 \iff x=y$, with the last implication following since $d_X$ is a metric.
\item Since $d_X$ is a metric, $d_X (x,y) = d_X (y,x)$. If $d_X (x,y) \leq 1$ then $d(x,y) = d_X (x,y) = d_X (y,x) = d(y,x)$. If $d_X (x,y) > 1$ then $d(x,y) = 1 = d(y,x)$.
\item If $d_X (x,z)$ and $d_X (z,y)$ are both less than or equal to one then using the triangle inequality for $d_X$ we have: $d(x,z) + d(z,y) = d_X (x,z) + d_X(z,y) \geq d_X (x,y) \geq d(x,y)$. \\
If either $d_X (x,z)$ or $d_X (z,y)$ is greater than one we have: $d(x,z) + d(z,y) \geq 1 \geq d(x,y)$.
\end{itemize}
Therefore $(X,d)$ is a metric space.
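As a numerical sanity check (not part of the proof), the following sketch verifies the four metric properties of $d = \min \{ d_X, 1\}$ on a random sample of points, taking $d_X$ to be the Euclidean metric on $\R^2$; all helper names are our own.

```python
# Spot-check that d = min(d_X, 1) satisfies the metric axioms, taking
# d_X to be the Euclidean metric on R^2.
import itertools, math, random

def d_X(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d(p, q):
    return min(d_X(p, q), 1.0)

random.seed(0)
points = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(30)]

for p, q, r in itertools.product(points, repeat=3):
    assert d(p, q) >= 0                           # non-negativity
    assert (d(p, q) == 0) == (p == q)             # d(p,q) = 0 iff p = q
    assert d(p, q) == d(q, p)                     # symmetry
    assert d(p, q) <= d(p, r) + d(r, q) + 1e-12   # triangle inequality
```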
\begin{ex}
\end{ex}
The pair $(\R^2, d_{\frac{1}{2}})$ is not a metric space because it does not satisfy the triangle inequality. Infinitely many triples of points witness this failure; for one example, take $x = (0,0)$, $y=(4,0)$ and $z=(4,1)$. Then $d_{\frac{1}{2}} (x,y) = 4$, $d_{\frac{1}{2}} (y,z) = 1$ and $d_{\frac{1}{2}} (x,z) = 9$, so $d_{\frac{1}{2}} (x,y) + d_{\frac{1}{2}} (y,z) = 5 \not\geq 9 = d_{\frac{1}{2}} (x,z)$.
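The three distances in this counterexample are easy to verify numerically; the sketch below is illustrative only and assumes $d_{\frac{1}{2}}$ is given by the usual $\ell^p$ formula with $p = 1/2$.

```python
# The "metric" d_{1/2}(x, y) = (|x1-y1|^{1/2} + |x2-y2|^{1/2})^2 on R^2
# fails the triangle inequality at the points used in the text.
def d_half(x, y):
    return (abs(x[0] - y[0]) ** 0.5 + abs(x[1] - y[1]) ** 0.5) ** 2

x, y, z = (0, 0), (4, 0), (4, 1)
assert d_half(x, y) == 4.0
assert d_half(y, z) == 1.0
assert d_half(x, z) == 9.0
assert d_half(x, y) + d_half(y, z) < d_half(x, z)  # 5 < 9: triangle inequality fails
```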
\begin{ex}
\end{ex}
Let $d$ be the uniform metric on $C([a,b])$. We are assuming without proof that the maximum $\ds \max_{z \in [a,b]} |f(z) -g(z)|$ is achieved on $[a,b]$ for all $f,g \in C([a,b])$. Let $f,g,h \in C([a,b])$.
\begin{itemize}
\item $d(f,g)$ is non-negative since it is a maximum of the non-negative values $|f(z) -g(z)|$ for $z \in [a,b]$.
\item $\ds d(f,g) = 0 \iff \max_{z \in [a,b]} |f(z) -g(z)| = 0 \iff |f(z) - g(z)| = 0 \ \ \forall z \in [a,b] \iff f=g$.
\item For any $z \in [a,b]$, $|f(z) - g(z)| = |g(z) - f(z)|$.\\
\noindent Therefore $\ds d(f,g) = \max_{z \in [a,b]} |f(z) -g(z)| = \max_{z \in [a,b]} |g(z) -f(z)| = d(g,f)$.
\item Triangle Inequality:
\begin{align}
\nonumber d(f,g) &= \max_{z \in [a,b]} |f(z) - g(z)| \\
\nonumber &= \max_{z \in [a,b]} |f(z) - h(z)+ h(z) - g(z)| \\
&\leq \max_{z \in [a,b]} \left( |f(z) -h(z)| + |h(z) - g(z)| \right)
\end{align}
The last inequality follows from the triangle inequality for the Euclidean metric on $\R^1$. Let $t \in [a,b]$ be a point where the maximum (1.1) is achieved. Then we have
\begin{align*}
d(f,g) &\leq |f(t) -h(t)| + |h(t)-g(t)| \\
&\leq \max_{z \in [a,b]} |f(z) - h(z)| + \max_{z \in [a,b]} |h(z) -g(z)| \\
&= d(f,h) + d(h,g)
\end{align*}
\end{itemize}
Therefore $(C([a,b]),d)$ is a metric space.
\begin{ex}
\end{ex}
Since a taxi driver can only travel directly north-south or east-west, for a driver the distance between two points is the sum of the absolute differences of their $x$-coordinates and of their $y$-coordinates. Therefore a taxi driver is interested in the $l^1$ metric on the space $\R^2$: $d_1(x,y) = |x_1-y_1| + |x_2 -y_2|$. This is sometimes called the ``taxi-cab metric.''
We show that $(\R^2, l^1)$ does in fact satisfy the properties of a metric space. Let $x,y,z \in \R^2$.
\begin{itemize}
\item Since $d_1 (x,y)$ is the sum of two absolute values it is non-negative.
\item $d_1 (x,y) = 0 \iff |x_1 - y_1| + |x_2 - y_2| = 0 \iff |x_1 - y_1| = |x_2 - y_2| = 0 \iff x_1 = y_1 \mbox{ and } x_2 = y_2 \iff x=y$.
\item $d_1$ is symmetric because $|x_1 - y_1| = |y_1 - x_1|$ and $|x_2 - y_2| = |y_2 - x_2|$.
\item The triangle inequality for $d_1$ follows from the triangle inequality for $\R^1$ with the Euclidean metric.
\begin{align*}
d_1(x,y) &= |x_1 - y_1| + |x_2 - y_2| \\
&\leq \left( |x_1 - z_1| + |z_1 - y_1| \right) + \left( |x_2 - z_2| + |z_2 - y_2| \right) \\
&= \left( |x_1 - z_1| + |x_2 - z_2| \right) + \left( |z_1 - y_1| + |z_2 - y_2| \right) \\
&= d_1(x,z) + d_1(z,y)
\end{align*}
\end{itemize}
Therefore $(\R^2,l^1)$ is a metric space.
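A quick numerical spot-check (illustrative only, names our own): the symmetry and triangle-inequality properties of $d_1$ hold on a grid of sample points, and $d_1$ genuinely differs from the Euclidean distance.

```python
# The taxi-cab (l^1) distance on R^2, with a spot-check of symmetry and
# the triangle inequality on a grid of integer points.
import itertools

def d1(x, y):
    return abs(x[0] - y[0]) + abs(x[1] - y[1])

grid = list(itertools.product(range(-2, 3), repeat=2))
for x, y, z in itertools.product(grid, repeat=3):
    assert d1(x, y) == d1(y, x)               # symmetry
    assert d1(x, y) <= d1(x, z) + d1(z, y)    # triangle inequality

assert d1((0, 0), (3, 4)) == 7  # compare: the Euclidean distance is 5
```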
\section{Sequences}
\begin{ex}
\end{ex}
Let $\{s_n\}$ be a convergent sequence in $\C$ and let $s$ be the limit of $\{s_n\}$. Fix $\epsilon > 0$. Then there exists an $N \in \N$ such that for all $n > N$, $|s_n-s| \leq \epsilon$. By the triangle inequality, for all $n \in \N$ we have
\begin{align*}
&|s_n - 0| \leq |s_n-s| + |s-0| \ \ \Rightarrow \ \ |s_n|-|s| \leq |s_n-s| \\
&|s - 0| \leq |s_n-s| + |s_n-0| \ \ \Rightarrow \ \ |s|-|s_n| \leq |s_n-s|
\end{align*}
Combining these two inequalities, for $n>N$ we have
$$\left| |s_n| - |s| \right| \leq |s_n -s| \leq \epsilon$$
Since $\epsilon$ was arbitrary, this implies $\{|s_n|\} \ra |s|$, as desired.
The converse statement does not hold in general. For example, consider the sequence $s_n = (-1)^n$. In this case $\{|s_n|\}$ converges to 1 while $\{s_n\}$ diverges.
\begin{ex}
\end{ex}
Let $X$ be a space with the Hausdorff property. Let $\{x_n\}$ be a sequence in $X$ that converges to a point $x \in X$. Assume that $\{x_n\}$ also converges to a point $y \in X$. Fix any neighborhood $N(x)$ of $x$ and any neighborhood $N(y)$ of $y$. By definition of convergence, $N(x)$ and $N(y)$ each contain all but finitely many terms of $\{x_n\}$. In particular, only finitely many terms lie in $N(x)\backslash N(y)$. Therefore $N(x) \backslash (N(x) \backslash N(y) ) = N(x) \cap N(y)$ must contain infinitely many terms; in particular $N(x) \cap N(y) \neq \emptyset$. Since this holds for all neighborhoods $N(x), N(y)$, by the Hausdorff property we must have $x=y$.
Alternatively, we can assume by way of contradiction that $\{x_n\}$ converges to a point $y \in X$ such that $y \neq x$. Then by the Hausdorff property we know that there exist neighborhoods $N(x)$ of $x$ and $N(y)$ of $y$ such that $N(x) \cap N(y) = \emptyset$. Since $N(x)$ contains all but a finite number of the elements of $\{x_n\}$, $N(y)$ must contain only a finite number of the elements of $\{x_n\}$, contradicting the fact that $\{x_n\}$ converges to $y$. Therefore $\{x_n\}$ converges to $x$ only.
\begin{ex}
\end{ex}
Let $\{ p_n \}$ be a Cauchy sequence in a metric space $(X,d)$ with a subsequence $\{ p_{n_i} \}$ that converges to a point $p \in X$.
Fix any $\ep > 0$.
Since $\{ p_n \}$ is Cauchy there exists an $N_1 \in \N$ such that for all $n,m > N_1$, $d(p_n, p_m) < \ep/2$.
Since $\{ p_{n_i} \} \ra p$ there exists an $I \in \N$ such that for all $i > I$, $d(p_{n_i}, p) < \ep/2$.
Let $N_2 = n_I$. Let $N = \max \{ N_1, N_2\}$. Fix $j \in \N$ such that $n_j > N$.
Then for all $n>N$, by the triangle inequality we have
$$d(p_n,p) \leq d(p_n,p_{n_j}) + d(p_{n_j} , p) < \ep/2 + \ep/2 = \ep.$$
Since $\epsilon$ was arbitrary, this implies $\{p_n\} \ra p$, as desired.
\break
\begin{ex}
\end{ex}
\be
\item We will prove by induction on $n$ that $x_n \in \Q_{>0}$. The base case is $x_1 = 2 \in \Q_{>0}$. For the inductive step assume that $x_n \in \Q_{>0}$. So $x_n = m/p$ where $m,p \in \N$.
$$x_{n+1} = \frac{1}{2} \left( \frac{m}{p} + \frac{2}{m/p} \right)
=\frac{1}{2} \left( \frac{m^2 + 2p^2}{mp} \right)
=\frac{m^2 + 2p^2}{2mp} \in \Q_{>0}$$
This completes the inductive step.
\item
$$x_{n+1} -\sqrt{2} = \frac{1}{2} \left( x_n + \frac{2}{x_n} \right) - \sqrt{2}
=\frac{x_n^2 + 2 - 2 \sqrt{2} x_n}{2x_n} = \frac{(x_n- \sqrt{2})^2}{2x_n}.$$
To show that $x_{n+1} > \sqrt{2}$ use induction on $n$. The base case is $x_1 = 2 > \sqrt{2}$. For the inductive step assume that $x_n > \sqrt{2}$. Then we have $x_n >0$ (by part 1) and $(x_n - \sqrt{2})^2 > 0$, so the above equality implies $x_{n+1} - \sqrt{2} > 0 \ \ \Rightarrow \ \ x_{n+1} > \sqrt{2}$, completing the inductive step.
\item To show that $x_{n+1} < x_n$, we will show the equivalent statement $x_n-x_{n+1} > 0$.
$$x_n - x_{n+1} = x_n - \frac{1}{2} \left(x_n+ \frac{2}{x_n} \right) = \frac{x_n^2 - 2 }{2x_n} > 0.$$
The last inequality follows from the fact that $x_n > \sqrt{2}$, as shown in part 2.
\item Let $\rho = \frac{x_1-\sqrt{2}}{2\sqrt{2}}$. Then for $n \in \N$
\begin{align*}
|x_{n+1} - \sqrt{2}| &= \left| \frac{ (x_n - \sqrt{2})^2}{2 x_n} \right| \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \bullet \mbox{ By part 2.}\\
&= \frac{|x_n - \sqrt{2}|}{|2 x_n|} \cdot |x_n - \sqrt{2}| \\
&< \frac{(x_n - \sqrt{2})}{|2 \sqrt{2}|} \cdot |x_n - \sqrt{2}| \ \ \ \ \ \ \ \bullet \mbox{ By part 2, $x_n > \sqrt{2}$.} \\
&\leq \frac{ (x_1 - \sqrt{2})}{2 \sqrt{2}} \cdot |x_n - \sqrt{2}| \ \ \ \ \ \ \ \bullet \mbox{ By part 3, $x_1 \geq x_n$.} \\
&= \rho |x_n - \sqrt{2}|
\end{align*}
\item Proof by induction on $n$. The base case $n=0$ results in equality since $\rho^0=1$. For the inductive step, assume that $|x_{n} - \sqrt{2}| \leq \rho^{n-1} |x_1 - \sqrt{2}|$. Combining this with the result from part 4 we have
$$|x_{n+1} - \sqrt{2}| \leq \rho |x_n - \sqrt{2}| \leq \rho \left( \rho^{n-1} |x_1 - \sqrt{2}| \right) = \rho^{n} |x_1 - \sqrt{2}|.$$
\item Note that $0 < \rho <1$. Therefore, $\lim_{n \ra \infty} \rho^n = 0$ (Rudin Theorem 3.20 (e)). This implies that $\lim_{n \ra \infty} \rho^n |x_1-\sqrt{2}| = 0$ (Rudin Theorem 3.3(b)). Combining this result with part 5 and the fact that $|x_{n+1}-\sqrt{2}| \geq 0$ for all $n \in \N$ we see that $\lim_{n \ra \infty} |x_{n+1} - \sqrt{2}| = 0$ (see the remark preceding Rudin Theorem 3.20). So $\lim_{n \ra \infty} x_{n+1} = \sqrt{2}$.
The sequence $\{x_n\}$ therefore has a limit in $\R$ but no limit in $\Q$: since the metric on $\Q$ is the restriction to $\Q$ of the metric on $\R$, if $q \in \Q$ were a limit of $\{x_n\}$ in $\Q$ then $q$ would also be a limit of $\{x_n\}$ in $\R$; but $q \neq \sqrt{2}$ because $\sqrt{2} \notin \Q$, contradicting the uniqueness of limits in $\R$ (Rudin Theorem 3.2(b)).
$\{x_n\}$ is a convergent sequence in $\R$, so $\{x_n\}$ is Cauchy in $\R$ (Theorem 1.5). Since the metric we are using on $\Q$ is just the restriction to $\Q$ of the metric we are using on $\R$, this implies that $\{x_n\}$ is Cauchy in $\Q$ as well.
\item We have demonstrated the existence of a Cauchy sequence in $\Q$ which does not converge in $\Q$, therefore $\Q$ is not complete.
\ee
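The recursion and the error bound of parts 4 and 5 can be observed numerically. The sketch below uses floating-point arithmetic as a stand-in for the exact rationals of part 1, so it is an illustration rather than a proof.

```python
# The recursion x_{n+1} = (x_n + 2/x_n)/2 starting from x_1 = 2, together
# with the error bound |x_{n+1} - sqrt(2)| <= rho^n |x_1 - sqrt(2)| of part 5.
import math

sqrt2 = math.sqrt(2)
rho = (2 - sqrt2) / (2 * sqrt2)  # rho = (x_1 - sqrt 2)/(2 sqrt 2), in (0,1)

x = 2.0
for n in range(6):
    # parts 2 and 3: the iterates stay at or above sqrt(2)
    assert x > sqrt2 or abs(x - sqrt2) < 1e-15
    # part 5: geometric error bound (small tolerance for rounding)
    assert abs(x - sqrt2) <= rho ** n * (2 - sqrt2) + 1e-15
    x = (x + 2 / x) / 2

assert abs(x - sqrt2) < 1e-12  # the convergence is in fact very rapid
```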
\begin{ex}
\end{ex}
Let $X = C([0,1])$ equipped with the uniform metric. Let $\{f_n\}$ be a sequence in $X$ that converges to a function $f \in X$. Fix any $x \in [0,1]$ and any $\ep > 0$. Since $\{f_n\}$ converges to $f$ there exists an $N \in \N$ such that for all $n>N$, $d(f_n,f) < \ep$. From the definition of the uniform metric, for $n>N$ we have $\max_{z \in [0,1]} |f_n (z) - f(z)| < \ep$. So in particular, for $x \in [0,1]$ and $n>N$ we have $|f_n(x) - f(x)| < \ep$. Since $\ep$ was arbitrary, this implies that $\lim_{n \ra \infty} f_n(x) = f(x)$. Convergence in the space $C([0,1])$ with the uniform metric implies point-wise convergence.
\begin{ex}
\end{ex}
\begin{multicols}{2}
Fix any $x \in [0,1]$. If $x=0$, then $f_n(x) = 0$ for all $n \in \N$, so $\lim_{n \ra \infty} f_n(x) = 0$. If $x \neq 0$, fix $N \in \N$ such that $N > 2/x$. Then for all $n>N$, $x > 2/N > 2/n$, so $f_n(x) = 0$. Therefore $\lim_{n \ra \infty} f_n(x) = 0$.
The sequence $\{f_n\}$ does not converge to $f$ in $C([0,1])$ equipped with the uniform metric. For all $n \in \N$, $1/n \in [0,1]$ and we have $|f_n (1/n) - f(1/n)| = |1-0|=1$. Therefore $d(f_n,f) = \max_{z \in [0,1]} |f_n (z) - f(z)| \geq 1$ for all $n \in \N$, so $\{f_n\}$ can not converge to $f$. This shows that the converse of Exercise 1.11 does not hold; point-wise convergence does not imply convergence under the uniform metric.
\begin{pspicture}(-1,-1)(5,6)
\psline(0,-1)(0,5)
\psline(-1,0)(5,0)
\psline[linewidth=0.1,dotsize=.2, arrows=*-*](0,0)(1,4)(2,0)(4,0)
\psline(-.1,4)(.1,4)
\psline(1,-.1)(1,.1)
\psline(2,-.1)(2,.1)
\psline(4,-.1)(4,.1)
\rput[r](-.1,4){1}
\rput[t](1,-.15){1/n}
\rput[t](2,-.15){2/n}
\rput[t](4,-.15){1}
\rput[tr](-.15,-.15){0}
\rput[l](1.65,2.15){$f_n (x)$}
\end{pspicture}
\end{multicols}
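Reading the formula for $f_n$ off the figure (a tent of height 1 supported on $[0,2/n]$), the pointwise limit $0$ and the fixed uniform distance $1$ can both be checked numerically; the sketch below is illustrative only.

```python
# The tent functions f_n: rise linearly from 0 at x=0 to 1 at x=1/n, fall
# back to 0 at x=2/n, and vanish thereafter.  They converge to 0 pointwise
# but stay at uniform distance 1 from the zero function.
def f(n, x):
    if x <= 1 / n:
        return n * x
    if x <= 2 / n:
        return 2 - n * x
    return 0.0

# pointwise convergence at a fixed x: f_n(x) = 0 once n > 2/x
x = 0.3
assert all(f(n, x) == 0.0 for n in range(8, 100))

# but the uniform distance from 0 never shrinks: f_n(1/n) = 1 for every n
assert all(abs(f(n, 1 / n) - 1.0) < 1e-9 for n in range(1, 100))
```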
\begin{ex}
\end{ex}
Let $\{x_n\}$ be a nondecreasing sequence in $\R$. Assume that $\{x_n\}$ is not Cauchy. Then there exists an $\ep > 0$ such that for all $N \in \N$ there are $m,n > N$ such that $|x_n - x_m| > \ep$.
Therefore we can find $m_1, n_1 \in \N$ such that $m_1 < n_1$ and $|x_{n_1} - x_{m_1}| > \ep$. Inductively we will construct a sequence of such pairs of points. Assume that we have already fixed integers $\{m_i, n_i\}_{i=1}^k$ such that
$$m_1 < n_1 < m_2 < n_2 < \ldots < m_k < n_k$$
and $|x_{n_i} - x_{m_i}| > \ep$ for $i=1,2, \ldots ,k$. Then by our choice of $\ep$, we know that there exist $m_{k+1} ,n_{k+1} > n_k$ such that $|x_{n_{k+1}} - x_{m_{k+1}}| > \ep$. By relabeling if needed we can assume $m_{k+1} < n_{k+1}$. This completes the inductive step of our construction.
Since $\{x_n\}$ is nondecreasing we know that
$$x_{m_1} < x_{n_1} \leq x_{m_2} < x_{n_2} \leq x_{m_3} < x_{n_3} \leq \ldots. $$
Then for any $k \in \N$ we have
$$x_{n_k} - x_{m_1} \geq \sum_{i=1}^{k} \left( x_{n_i} - x_{m_i} \right) > \sum_{i=1}^{k} \ep = k \ep$$
Since this holds for all $k \in \N$, the sequence $\{x_n\}$ is unbounded. So by contrapositive, if $\{x_n\}$ is bounded it must be Cauchy.
Since $\R$ is complete, any Cauchy sequence in $\R$ converges. Therefore a bounded nondecreasing sequence $\{x_n\}$ in $\R$ must converge.
For an alternative proof of this fact using the idea of a least upper bound, see Rudin Theorem 3.14.
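A concrete illustration (ours, not from the text): the bounded nondecreasing sequence $x_n = 1 - 1/2^n$ converges to its least upper bound $1$.

```python
# A bounded nondecreasing sequence converges: x_n = 1 - 1/2^n increases
# toward 1, and its terms are eventually within any eps of the limit.
def x(n):
    return 1 - 1 / 2 ** n

# nondecreasing and bounded above by 1
assert all(x(n) <= x(n + 1) <= 1 for n in range(1, 100))

# convergence: for eps = 1e-6, every term beyond N = 20 is within eps of 1
eps = 1e-6
assert all(abs(x(n) - 1) < eps for n in range(21, 200))
```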
\begin{ex}
\end{ex}
Let $\{a_n\} \subset \R$ be a sequence such that $\lim_{n \ra \infty} |a_n|^{1/n} = \alpha < 1$. Fix any number $x \in (\alpha,1)$. Then there exists an $N \in \N$ such that for all $n > N$, $|a_n|^{1/n} < x$. So for $n>N$ we have $|a_n| < x^n$. Since $0 < x < 1$, the geometric series $\sum x^n$ converges, so by the comparison test $\sum a_n$ converges.

Now let $\{a_n\} \subset \R$ be a sequence of nonzero terms such that $\lim_{n \ra \infty} \left| \frac{a_{n+1}}{a_n} \right| = \alpha < 1$. Fix any number $x \in (\alpha,1)$. Then there exists an $N \in \N$ such that for all $n > N$, $\left| \frac{a_{n+1}}{a_n} \right| < x$. So for $n>N$ we have $|a_{n+1}| < x |a_n|$. Applying this repeatedly we have for any $n > N$
$$|a_n| < x |a_{n-1}| < x^2 |a_{n-2}| < \cdots < x^{n-N} |a_N| = x^n x^{-N} |a_N|.$$
As noted above, $\sum x^n$ converges. Since $x^{-N} |a_N|$ is just a constant, the series $\sum x^n x^{-N} |a_N|$ also converges. Then by the comparison test, $\sum a_n$ converges.
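As a concrete illustration of the ratio-test argument (our own example), take $a_n = n/2^n$: the ratios tend to $1/2 < 1$, and past a cutoff $N$ the terms are dominated by the geometric bound $x^{n-N} |a_N|$, here with $x = 0.75$.

```python
# For a_n = n / 2^n the ratios a_{n+1}/a_n -> 1/2 < 1, and past a cutoff N
# the terms are dominated by the geometric bound x^(n-N) * a_N from the text.
def a(n):
    return n / 2 ** n

x = 0.75
N = 4          # for n >= 4, a_{n+1}/a_n = (n+1)/(2n) <= 5/8 < x
for n in range(N, 60):
    assert a(n + 1) / a(n) < x

for n in range(N, 60):
    assert a(n) <= x ** (n - N) * a(N) + 1e-15

# the partial sums are bounded, consistent with convergence (the sum is 2)
s = sum(a(n) for n in range(1, 200))
assert abs(s - 2.0) < 1e-9
```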
\section{Topology of Metric Spaces}
\begin{ex}
\end{ex}
\begin{multicols}{2}
Let $x$ be a limit point of $E'$. Fix any $\ep > 0$. We want to show that there is a point $z \in N_{\ep}(x)\backslash \{x\}$ such that $z \in E$. Since $x$ is a limit point of $E'$ there exists a point $y \in N_{\ep/2}(x)$ such that $y \in E'$ and $y \neq x$. Let $\delta = d(x,y)$. Then $N_{\delta}(y)$ is a neighborhood of $y$ contained in $N_{\ep}(x) \backslash \{x\}$. Since $y \in E'$, $y$ is a limit point of $E$, so there exists a point $z \in E$ such that $z \in N_{\delta} (y) \subset N_{\ep}(x) \backslash \{x\}$.
\begin{pspicture}(-4,-2.55)(2.55,2.55)
\pscircle[linestyle=dashed](0,0){2.5}
\pscircle[linestyle=dashed](0,0){1.25}
\psdot(0,0)
\psdot(-.6,.8)
\pscircle[linestyle=dashed](-.6,.8){1}
\psdot(-.6,1.5)
\psline(0,0)(0,-2.5)
\psline(0,0)(-.6,.8)
\psline(0,0)(1.25,0)
\rput[b](.7,.1){$\ep/2$}
\rput[tl](.1,-.1){x}
\rput[l](.1,-1.5){$\ep$}
\rput[tr](-.35,.35){$\delta$}
\rput[bl](-.5,.85){y}
\rput[r](-.7,1.5){z}
\end{pspicture}
\end{multicols}
Since this holds for all $\ep > 0$, $x$ is a limit point of $E$, so $x \in E'$. Therefore $E'$ contains all of its limit points, so $E'$ is closed.
It is possible for a limit point of $E$ to be an isolated point of $E'$. For example, let $E = \{1/n\}_{n=1}^{\infty} \subset \R$. Then $E'=\{0\}$, which has no limit points.
\begin{ex}
\end{ex} We begin by showing that $d$ is a metric. Positivity follows directly from the definition of $d$. The symmetry of $d$ follows from the symmetry of the relations $p=q$ and $p \neq q$. For the triangle inequality, let $p,q,r \in X$. If $p=q$, then since $d$ is a non-negative function we have
$$d(p,q) = 0 \leq d(p,r) + d(r,q).$$
In the case where $p \neq q$, we must have either $p \neq r$ or $r \neq q$ (or both), since $p = r$ and $r = q$ together would imply $p = q$. So
$$d(p,q) = 1 \leq d(p,r) + d(r,q).$$
Therefore $d$ is a metric on $X$. Note that the neighborhoods $N_{\ep} (x)$ in this metric are either the point $x$ alone (when $\ep \leq 1$) or all of $X$ (when $\ep >1$).
Let $S$ be any subset of $X$ and let $p$ be any point of $S$. Then $N_{1/2} (p) = \{p\} \subseteq S$. Therefore $p$ is an interior point of $S$. Since this holds for all $p \in S$, $S$ is an open set.
Since any subset of $X$ is open, the complement of any subset of $X$ is open, so any subset of $X$ is also closed (Theorem 1.8). Alternatively, we can show that any subset of $X$ is closed by noting that any subset of $X$ has no limit points. This follows since any point $p \in X$ has a neighborhood $N_{1/2}(p) = \{p\}$ that contains no points of $X$ except for $p$ itself.
The compact subsets of $X$ are those subsets with a finite number of points.
\begin{itemize}
\item Let $E \subseteq X$ be a subset containing a finite number of points. Given any open cover of $E$ we know that for each point $x \in E$ we can find a set $G_x$ from our cover such that $x \in G_x$. Then $\{G_x\}_{x \in E}$ forms a finite sub-cover of $E$.
\item Let $E \subseteq X$ be a subset containing an infinite number of points. Form an open cover of $E$ by taking $\{N_{1/2}(x)\}_{x \in E} = \{ \{x\} \}_{x \in E}$. Since $E$ is infinite this is an infinite collection of sets and removing any set $N_{1/2}(x)$ from the collection results in the point $x \in E$ no longer being an element of any set in the collection. Therefore no finite sub-cover exists, so $E$ is not compact.
\end{itemize}
\begin{ex}
\end{ex}
For $n \in \N$ let $G_n := (1/n,1)$. Then each $G_n$ is an open set in $(0,1)$. For any $x \in (0,1)$, $x \in G_n$ for all $n > 1/x$. Therefore $\cup_{n=1}^{\infty} G_n = (0,1)$, so $\{G_n\}_{n=1}^{\infty}$ is an open cover of $(0,1)$. However, if we take any finite subset $I \subset \N$ and let $N$ be the largest element of $I$, then $1/(2N) \notin \cup_{n \in I} G_n$. So $\{G_n\}_{n=1}^{\infty}$ has no finite sub-cover of $(0,1)$.
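The failure of finite subcovers can be checked mechanically. In the sketch below the finite index set $I$ is an arbitrary example of ours; the witness point $1/(2N)$ is the one used in the text.

```python
# The cover G_n = (1/n, 1) of (0,1): any finite subfamily {G_n : n in I}
# misses the point 1/(2N), where N = max(I).
def in_Gn(x, n):
    return 1 / n < x < 1

# every x in (0,1) is eventually covered
for x in [0.001, 0.25, 0.9]:
    assert any(in_Gn(x, n) for n in range(1, 2001))

# but a finite subfamily always leaves a point uncovered
I = {2, 5, 17, 40}
N = max(I)
witness = 1 / (2 * N)
assert not any(in_Gn(witness, n) for n in I)
```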
\vbox{
\begin{ex}
\end{ex}
We can consider $\Q$ as a subset of $\R$ with the metric on $\Q$ the restriction to $\Q$ of the usual metric on $\R$. The set $E$ is then $\left[(- \sqrt{3},- \sqrt{2}) \cup (\sqrt{2},\sqrt{3}) \right] \cap \Q$. For $x \in \Q$, define $\delta_x = \min\{|x-\sqrt{2}|, |x-\sqrt{3}|,|x+\sqrt{2}|,|x+\sqrt{3}|\}$. Since $\sqrt{2}, \sqrt{3} \notin \Q$, $\delta_x > 0$ for all $x \in \Q$.}
To show that $E$ is closed, let $x \in \Q\backslash E$. Then $N_{\delta_x} (x) \cap E = \emptyset$. Since this holds for all $x \in \Q \backslash E$ we have that $\Q \backslash E$ is open in $\Q$, so $E$ is closed in $\Q$.
Similarly, to show that $E$ is open, consider any $x \in E$. Then $N_{\delta_x} (x) \subset E$, so every point of $E$ is an interior point of $E$.
The set $E$ is bounded since $E \subseteq N_{\sqrt{3}}(0)$.
To show that $E$ is not compact, consider the cover of $E$ given by $\{G_n\}_{n=3}^{\infty}$ where $G_n := (-\sqrt{3},-\sqrt{2}) \cup (\sqrt{2}, \sqrt{3} - 1/n)$. To show that each $G_n$ is open we use the same argument as was used to show that $E$ is open, but with $\delta_{x,n} = \min\{|x-\sqrt{2}|, |x-\sqrt{3}+1/n|,|x+\sqrt{2}|,|x+\sqrt{3}|\}$. The argument that $\{G_n\}_{n=3}^{\infty}$ is a cover and has no finite sub-cover is identical to the argument in problem 1.17 except that instead of choosing $1/(2N)$ as the element not in our sub-cover we must choose some rational number in $(\sqrt{3}-1/N,\sqrt{3})$. Since $\Q$ is dense in $\R$ we know that such a rational number exists.
\begin{ex}
\end{ex}
Let $S \subseteq \R^n$ be a nonempty set that is both open and closed, and suppose that its complement $S^c$ is also non-empty. Then we can find points ${\bf p},{\bf q} \in \R^n$ such that ${\bf p} \in S$ and ${\bf q} \in S^c$. Recall that the line segment between ${\bf p}$ and ${\bf q}$ in $\R^n$ is given by the set of convex combinations $(1-t) {\bf p} + t {\bf q}$ where $t \in [0,1]$. So let $A$ be the set of real numbers given by
$$A := \{ t \in [0,1] : (1-t) {\bf p} + t {\bf q} \in S \}$$
$A$ is non-empty since $0 \in A$ (because ${\bf p} \in S$) and $A$ has an upper bound of 1, so by the least upper bound property of the real numbers $\sup A$ exists. Let $x := \sup A$ and define ${\bf r} = (1-x) {\bf p} + x {\bf q}$.
First consider the case ${\bf r} \in S$. Since ${\bf q} \in S^c$ this forces $x \in [0,1)$. Fix any $\ep >0$ and then choose $s \in (x,1)$ such that $|s-x|< \frac{\ep}{d({\bf p}, {\bf q})}$. Define ${\bf u} = (1-s) {\bf p} + s {\bf q}$. Then
$$d({\bf u},{\bf r}) = |{\bf u} - {\bf r}|
= |(1-s) {\bf p} + s {\bf q} - ((1-x) {\bf p} + x {\bf q})|
= |(s-x)({\bf q} - {\bf p}) |
= |s-x| \, |{\bf p}- {\bf q}|
< \ep .$$
Therefore ${\bf u} \in N_{\ep}({\bf r})$. However, since $s > x = \sup A$, ${\bf u} \notin S$. Since $\ep$ was arbitrary we have that no neighborhood of ${\bf r}$ is contained in $S$, so $S$ is not open.
Second consider the case ${\bf r} \in S^c$. By assumption ${\bf p} \in S$, so $x \in (0,1]$. Fix any $\ep > 0$ and choose $s \in (0,x)$ such that $|s-x| < \frac{\ep}{d({\bf p},{\bf q})}$. Since $x = \sup A$, there exists a $t \in A$ with $s < t \leq x$, and the corresponding point ${\bf u}$ of the segment lies in $S$. Repeating the computation above with $t$ in place of $s$ gives $d({\bf u},{\bf r}) < \ep$, so ${\bf u} \in N_{\ep}({\bf r}) \cap S$. Since $\ep$ was arbitrary, no neighborhood of ${\bf r}$ is contained in $S^c$, so $S^c$ is not open and hence $S$ is not closed.
In either case we contradict the assumption that $S$ is both open and closed. Therefore the only subsets of $\R^n$ that are both open and closed are $\emptyset$ and $\R^n$; that is, $(\R^n,d)$ is a connected space (see Rudin Definition 2.45).
\begin{ex}
\end{ex}
For ${\bf x} \in \R^k$ and $r > 0$, let $I_{r} ({\bf x})$ denote the closed $k$-cell of side length $r$ centered at ${\bf x}$; that is, $I_{r}({\bf x}) = \{ {\bf y} \in \R^k : |y_i - x_i| \leq r/2 \mbox{ for } i=1,2,\ldots,k \}$. First note that for any $\ep>0$, $N_{\ep} ({\bf x}) \supset I_{\ep/k} ({\bf x})$. To see this, let ${\bf y} \in I_{\ep/k} ({\bf x})$. Then for $i=1,2,\ldots,k$, $|y_i-x_i| \leq \ep/2k$. Therefore
$$d({\bf x},{\bf y}) = \sqrt{ \sum_{i=1}^k (x_i-y_i)^2 }
\leq \sqrt{ \sum_{i=1}^k \left( \frac{\ep}{2k} \right)^2}
= \sqrt{\frac{\ep^2}{4k}}
= \frac{\ep}{2 \sqrt{k}}
\leq \ep,$$
with the last inequality following since $k \geq 1$. So ${\bf y} \in N_{\ep} ({\bf x})$, as desired.
Also note that since $d({\bf x}, {\bf y}) \geq \max \{ |x_i - y_i| \}_{i=1}^k$, for any $\ep > 0$ we have $N_{\ep} ({\bf x}) \subset I_{2 \ep} ({\bf x})$.
Now let $S$ be any bounded set in $\R^k$. So there exists an $M > 0$ such that $S \subseteq B_{M} (0)$. Fix any $\ep > 0$.
Let $E= \{ n \ep /k : n \in \Z, \ (|n|-1) \ep / k \leq M\}$. Then $E$ is a finite set of real numbers, and the set of points of $\R^k$ with every coordinate in $E$ is a rectangular lattice with a spacing of $\ep/k$ units between adjacent points, restricted to a $k$-cell of side length slightly larger than $2M$ centered at the origin. Then consider
\begin{equation}{\label{Eq:cellunion}}
\bigcup_{x_i \in E} I_{\ep/k} ((x_1, x_2, \ldots , x_k)) \subset \bigcup_{x_i \in E} N_{\ep} ((x_1, x_2, \ldots ,x_k)).
\end{equation}
The union of $k$-cells in (\ref{Eq:cellunion}) completely covers the $k$-cell of side length $2M$ centered at the origin and hence also covers $S \subseteq B_M ({\bf 0}) \subset I_{2M}({\bf 0})$. Therefore the union of $\ep$-neighborhoods in (\ref{Eq:cellunion}) also covers $S$. Since $E$ is finite and $k$ is finite, the union of $\ep$-neighborhoods in (\ref{Eq:cellunion}) is finite. Therefore we have the desired finite covering by $\ep$-neighborhoods.
This shows that every bounded subset of $(\R^k,d)$ is totally bounded.
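The lattice construction can be checked numerically for $k=2$ (the parameter choices below are our own): every point of the bounded region lies within $\ep$ of a lattice point, so finitely many $\ep$-neighborhoods suffice.

```python
# For k = 2, the lattice of points with coordinates in E = {n * eps/k}
# covers the square [-M, M]^2: every point lies within eps of some
# lattice point.
import math, random

k, M, eps = 2, 3.0, 0.5
spacing = eps / k
# lattice coordinates extending slightly past [-M, M]
n_max = int(math.ceil(M / spacing)) + 1
E = [n * spacing for n in range(-n_max, n_max + 1)]

random.seed(1)
for _ in range(500):
    p = (random.uniform(-M, M), random.uniform(-M, M))
    # nearest lattice point, found coordinate-wise
    q = tuple(min(E, key=lambda e: abs(e - c)) for c in p)
    dist = math.hypot(p[0] - q[0], p[1] - q[1])
    assert dist < eps  # p lies in the eps-neighborhood of the lattice point q
```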
\begin{ex}
\end{ex}
Let $E$ be a compact subset of a metric space $(X,d)$. Fix any $\ep > 0$. Then $\{N_{\ep} (x) \}_{x \in E}$ gives an open cover of $E$. Since $E$ is compact there must be a finite sub-cover, so there must exist a finite set of points $\{x_i\}_{i=1}^n$ such that $\{N_{\ep} (x_i)\}_{i=1}^n$ covers $E$. This gives the desired finite cover of $E$ by $\ep$-neighborhoods. Since this holds for all $\ep >0$, $E$ is totally bounded.
Note that this theorem can be used to give a non-constructive justification for the previous exercise.
\begin{ex}
\end{ex}
Let $(X,d)$ be a metric space. Let $E \subseteq X$ be a set such that every infinite subset of $E$ has a limit point in $E$. Let $\ep_n := 1/n$. Fix any $k \in \N$. Fix any point $x_{k,1} \in E$. Having chosen $x_{k,1}, x_{k,2}, \ldots , x_{k,j} \in E$, choose $x_{k,j+1} \in E$, if possible, such that $d(x_{k,j+1}, x_{k,i}) \geq \ep_k$ for $i=1,2,\ldots ,j$. Note that by the triangle inequality, if we have two distinct points $x_{k,l},x_{k,m} \in N_{\frac{1}{2} \ep_k} (x)$ for any point $x \in E$ then $d(x_{k,l},x_{k,m}) < \ep_k$, contradicting the construction of the $x_{k,i}$. Therefore, for each $x \in E$ we have a neighborhood of $x$ that contains a finite number of the points in $\{x_{k,i}\}$, so $x$ is not a limit point of $\{x_{k,i}\}$. Since this holds for all $x \in E$, $\{x_{k,i}\}$ has no limit points in $E$. Therefore, by our assumption about the set $E$, $\{x_{k,i}\}$ must have a finite number of elements. Let $N_k$ be the size of the set $\{x_{k,i}\}$. Then consider the set:
\begin{equation*}
D := \bigcup_{n \in \N} \{ x_{n,i} \}_{i=1}^{N_n}.
\end{equation*}
I claim that $D$ is a countable dense subset of $E$. As a countable union of finite sets $D$ is at most countable. Fix any point $x \in E$ and any $\ep > 0$. Then fix $k \in \N$ such that $\ep_k = 1/k < \ep$. Then there exists a point $x_{k,i} \in D$ such that $d(x_{k,i},x) < \ep_k < \ep$, or else $x$ would have been added to the set $\{x_{k,i}\}_{i=1}^{N_k}$. Since this holds for any $\ep > 0$, $x$ is in $D$ or is a limit point of $D$. Since this holds for all $x \in E$, the set $D$ is dense in $E$.
A subset $E$ of a metric space is {\it separable} if $E$ contains a countable dense subset. So if every infinite subset of a set $E$ has a limit point in $E$ then $E$ is separable. The converse of this statement is not true (consider $E=\R$). Note that the idea of not being connected is NOT the same as being separable.
\begin{ex}
\end{ex}
I claim that the sequence $\{x_n\}_{n=1}^{\infty}$ is Cauchy. Fix any $\ep > 0$. Then fix $N \in \N$ such that $1/N < \ep/2$. For $n,m > N$, $x_n \in G_n\subseteq G_N$ and $x_m \in G_m \subseteq G_N$. Since $G_N$ has radius less than or equal to $1/N$, by the triangle inequality $d(x_n,x_m) < 2/N < \ep$ for all $n,m > N$. Since $\ep$ was arbitrary, this shows that $\{x_n\}_{n=1}^{\infty}$ is Cauchy.
Since $X$ is complete, $\{x_n\}_{n=1}^{\infty}$ converges to some $x \in X$. Since $\{x_n\}_{n=1}^{\infty} \subseteq \mathcal{E}$, $x$ is a limit point of $\mathcal{E}$. Since $\mathcal{E}$ is closed, this implies that $x \in \mathcal{E}$. Since $\{x_n\}_{n=1}^{\infty} \subseteq K$, $x$ is also a limit point of $K$. Therefore $K$ has a limit point in $\mathcal{E}$, as desired.
\chapter{Continuity}
\section{Properties of Continuous Functions}
\begin{R4.3}
\end{R4.3}
Since $\{0\}$ is closed in $\R$, $f$ continuous implies that $Z(f) = f^{-1} (0)$ is closed in $X$ (Corollary 4.8).
\begin{R4.4}
\end{R4.4}
Fix any point $p \in X \backslash E$. Then since $E$ is dense in $X$, $p$ is a limit point of $E$. So there exists a sequence $\{p_n\}_{n=1}^{\infty} \subseteq E$ that converges to $p$ in $X$. By Theorem 4.6 we know that $f(p) = \lim_{x \ra p} f(x)$. Therefore, by Theorem 4.2 we have $f(p) = \lim_{n \ra \infty} f(p_n)$. Therefore, since $\{f(p_n)\}_{n=1}^{\infty} \subseteq f(E)$, $f(p)$ is either in $f(E)$ or a limit point of $f(E)$. Since this holds for all $p \in X \backslash E$ we have that $f(E)$ is dense in $f(X)$.
Now assume $f(x) = g(x)$ for all $x \in E$. By the above argument, for any $p \in X \backslash E$ we have a sequence $\{p_n\}_{n=1}^{\infty} \subseteq E$ that converges to $p$ with $f(p) = \lim_{n \ra \infty} f(p_n)$ and $g(p) = \lim_{n \ra \infty} g(p_n)$ . Since $\{p_n\}_{n=1}^{\infty} \subseteq E$, $f(p_n) = g(p_n)$ for all $n \in \N$. Therefore $f(p) = \lim_{n \ra \infty} f(p_n) = \lim_{n \ra \infty} g(p_n) = g(p)$. So we have $f(x) = g(x)$ for all $x \in X$.
\begin{ex}
\end{ex}
Let $(X, d_X)$ and $(Y,d_Y)$ be metric spaces with $d_X$ the discrete metric on $X$. Let $f: X \ra Y$ be any function. Fix any $x \in X$ and any $\ep > 0$. Then $N_{1/2}(x) = \{x\}$, so $f(N_{1/2}(x)) = \{f(x)\} \subseteq N_{\ep}(f(x))$. Therefore $f$ is continuous. So all functions whose domain space is equipped with the discrete metric are continuous.
Alternatively, by Exercise 1.16 we know that all subsets of $X$ are open. Therefore, for any open set $S \subseteq Y$, $f^{-1} (S) \subseteq X$ is open, so $f$ is continuous.
\section{Uniform Continuity}
\begin{R4.8}
\end{R4.8}
Let $f$ be a real uniformly continuous function on a bounded set $E \subset \R$. Since $f$ is uniformly continuous there exists $\delta > 0$ such that for any $x,y \in E$ with $|x-y| < \delta$, $|f(x) - f(y)| < 1$.
By exercise 1.20 we know that since $E$ is bounded in $\R$, $E$ is totally bounded.
So there exists a finite collection of points $\{x_i\}_{i=1}^n \subset \R$ such that $\{N_{\delta/2} (x_i)\}_{i=1}^n$ covers $E$. For each $i=1,2,\ldots,n$, fix $y_{i} \in N_{\delta/2} (x_i) \cap E$ (if such a $y_i$ exists). Define $I \subseteq \{1,2,\ldots,n\}$ by $i \in I \iff y_i$ exists. Then by the triangle inequality
$$ E \subseteq \bigcup_{i \in I} N_{\delta/2} (x_i) \subseteq \bigcup_{i \in I} N_{\delta} (y_i).$$
Let $B = \max_{i \in I} |f(y_i)|$. Then for any $x \in E$, there exists a $y_k$, $k \in I$, such that $|x - y_k| < \delta$, so $|f(x)| < |f(y_k)| + 1 \leq B+1$. Therefore $f(E) \subseteq N_{B+1} (0)$, so $f$ is bounded on $E$.
For an alternative approach, note that since $E$ is bounded there exists an $M \in \R$ such that $E \subseteq [-M,M]$. Then $\overline{E} \subseteq [-M,M]$ by Theorem 2.27. Since $\overline{E}$ is closed and bounded in $\R$, $\overline{E}$ is compact. Since $E$ is dense in $\overline{E}$ and $\R$ is complete, by Rudin problem 4.11 we can extend $f$ to a continuous function $F : \overline{E} \ra \R$ such that $F(x) = f(x)$ for all $x \in E$. By Theorem 4.16 we know that $F(\overline{E})$ is bounded, hence $f(E) = F(E) \subseteq F(\overline{E})$ is also bounded, as desired.
For a counterexample in the case where $E$ is not bounded, consider the function $f: \R \ra \R$ given by $f(x) = x$. For any $\ep > 0$ and any $x \in \R$, $f( N_{\ep} (x)) = N_{\ep} (x) = N_{\ep} (f(x))$, so $f$ is uniformly continuous. However, $f(\R) = \R$ is not bounded.
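\newpar To see that uniform continuity (and not mere continuity) is needed even on a bounded set, consider the standard example $f:(0,1) \ra \R$ given by $f(x) = 1/x$. The set $E = (0,1)$ is bounded and $f$ is continuous on $E$, but $f((0,1)) = (1,\infty)$ is unbounded; of course, $f$ is not uniformly continuous on $(0,1)$.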
\begin{R4.10}
\end{R4.10}
Let $f$ be a continuous mapping of a compact metric space $X$ into a metric space $Y$. Assume $f$ were not uniformly continuous. Then there exists an $\ep > 0$ such that for all $\delta > 0$ there exist $x,y \in X$ such that $d_X(x,y) < \delta$ but $d_Y(f(x),f(y)) \geq \ep$. Let $\delta_n := 1/n$. Then for each $n \in \N$, pick $p_n, q_n \in X$ such that $d_X(p_n,q_n) < \delta_n$ but $d_Y(f(p_n),f(q_n)) \geq \ep$. Then since $\delta_n \ra 0$, $d_X(p_n,q_n) \ra 0$.
Since $X$ is compact, by Theorem 3.6 we know that $\{p_n\}_{n=1}^{\infty}$ has a convergent subsequence $\{p_{n_i}\}_{i=1}^{\infty}$. Let $p \in X$ be the limit of this subsequence. Then I claim that $\{q_{n_i}\}_{i=1}^{\infty}$ also converges to $p$.
Fix any $\eta > 0$. Then there exists $N_1 \in \N$ such that for all $i \geq N_1$, $d_X(p_{n_i},p) < \eta/2$. There also exists $N_2 \in \N$ such that for all $n > N_2$, $d_X(p_n,q_n) < \eta/2$.
Since $n_i \geq i$ for all $i \in \N$, for $i > \max\{N_1,N_2\}$ we have by the triangle inequality $d_X (p,q_{n_i}) < \eta$. Since $\eta$ was arbitrary, this shows that $\{q_{n_i}\}_{i=1}^{\infty}$ converges to $p$.
Since $f$ is continuous, by Theorem 4.2 we have that $\lim_{i \ra \infty} f(p_{n_i}) = f(p) = \lim_{i \ra \infty} f(q_{n_i})$. So there exist $M_1, M_2 \in \N$ such that for $i > M_1$, $d_Y(f(p),f(p_{n_i})) < \ep/2$ and for $i > M_2$, $d_Y(f(p),f(q_{n_i})) < \ep/2$. Then for $i > \max\{M_1,M_2\}$, $d_Y(f(p_{n_i}),f(q_{n_i})) < \ep$, contradicting our construction of the points $p_n$ and $q_n$.
Therefore $f$ must be uniformly continuous.
\begin{R4.11}
\end{R4.11}
Let $f: X \ra Y$ be a uniformly continuous function. Let $\{x_n\}$ be a Cauchy sequence in $X$. Fix any $\ep > 0$. Then there exists a $\delta > 0$ such that for $x,y \in X$ with $d_X(x,y) < \delta$, $d_Y(f(x),f(y)) < \ep$. Since $\{x_n\}$ is Cauchy there exists an $N \in \N$ such that for $m,n > N$, $d_X(x_n,x_m) < \delta$. This implies that $d_Y(f(x_n),f(x_m)) < \ep$. Since $\ep$ was arbitrary, $\{f(x_n)\}$ is Cauchy.
Now let $E$ be a dense subset of a metric space $X$ and let $f$ be a uniformly continuous function from $E$ to a complete metric space $Y$. Let $p \in X \backslash E$. Then since $E$ is dense in $X$ there exists a sequence $\{p_n\} \subseteq E$ that converges to $p$. Therefore $\{p_n\}$ is Cauchy, so by the above result $\{f(p_n)\}$ is Cauchy in $Y$. Since $Y$ is complete, $\{f(p_n)\}$ converges in $Y$. Define $f(p)$ to be equal to the limit of $\{f(p_n)\}$.
I claim that the value $f(p)$ is independent of the choice of sequence $\{p_n\}$. Let $\{q_n\}$ be some other sequence in $E$ that converges to $p$ and let $q = \lim_{n \ra \infty} f(q_n)$. Assume that $q \neq f(p)$. Let $\ep = d_Y(q,f(p)) >0$. Then there exists $\delta > 0$ such that for $x,y \in X$ with $d_X(x,y) < \delta$, $d_Y(f(x),f(y)) < \ep/2$.
By the definition of convergence there exist $N_1,N_2, N_3, N_4 \in \N$ such that
\begin{align*}
&\mbox{for } n > N_1, \ \ \ d_X(p_n,p) < \delta/2,\\
&\mbox{for } n > N_2, \ \ \ d_X(q_n,p) < \delta/2,\\
&\mbox{for } n > N_3, \ \ \ d_Y(f(p_n),f(p)) < \ep/4, \\
&\mbox{for } n > N_4, \ \ \ d_Y(f(q_n),q) < \ep/4.
\end{align*}
Then for $n > \max\{N_1,N_2, N_3, N_4\}$, $d_X(p_n,q_n) < \delta$, so $d_Y(f(q_n),f(p_n)) <\ep/2$. Therefore
$$d_Y(q,f(p)) \leq d_Y(q,f(q_n)) + d_Y(f(q_n),f(p_n)) + d_Y(f(p_n),f(p)) < \ep/4 + \ep/2 +\ep/4 = \ep = d_Y(q,f(p)).$$
Contradiction. Therefore $f(p)$ is the limit of $\{f(p_n)\}$ for any sequence $\{p_n\} \subseteq E$ that converges to $p$.
So we have a well-defined function $f: X \ra Y$. We want to show that $f$ is continuous on $X$. Note that by our construction of $f$, for any $p \in X \backslash E$ and any $\ep, \delta > 0$, by taking $n$ sufficiently large in any sequence $\{p_n\}$ that converges to $p$ we can find a point $p_n \in N_{\delta}(p) \cap E$ such that $d_Y(f(p),f(p_n)) < \ep$.
Fix any point $p \in X$ and any $\ep > 0$. Then by uniform continuity we know that there exists a $\delta > 0$ such that for any $x,y \in E$ with $d_X(x,y) < \delta$ we have $d_Y(f(x),f(y)) < \ep/2$. By the construction of $f(p)$ we know that there is an $x \in E$ such that $x \in N_{\delta/2} (p)$ and $d_Y(f(p),f(x)) < \ep/4$ (if $p \in E$, just let $x=p$).
\begin{itemize}
\item For $y \in N_{\delta/2}(p) \cap E$ we have $d_Y(f(p),f(y)) \leq d_Y(f(p),f(x)) + d_Y(f(x),f(y)) < \ep/4 + \ep/2 <\ep$.
\item For any $q \in N_{\delta/2}(p) \backslash E$, by our construction of $f(q)$ we know that there exists $z \in N_{\delta/2} (p) \cap E$ such that $d_Y(f(q),f(z)) < \ep/4$. By the triangle inequality we then have $d_Y(f(p),f(q)) \leq d_Y(f(p),f(x)) + d_Y(f(x),f(z)) + d_Y(f(z),f(q))< \ep/4+ \ep/2 + \ep/4< \ep$.
\end{itemize}
So $f(N_{\delta/2}(p)) \subseteq N_{\ep}(f(p))$. Since $\ep$ was arbitrary, $f$ is continuous at $p$. Therefore $f$ is continuous on $X$, as desired. The idea of continuous extensions is useful in a number of contexts, for example problem 4.8.
\section{Continuous Functions: Examples}
\begin{R4.14}
\end{R4.14}
Let $f$ be a continuous mapping of $I = [0,1]$ into itself. Define a new function $h:I \ra \R$ by $h(x) = f(x) - x$. Since $f$ is continuous and $g(x) = -x$ is continuous, by Theorem 4.9 we know that $h$ is continuous.
Since $f(I) \subseteq I$, $h(0) = f(0) \in I$. If $h(0) = 0$ then $f(0)=0$ and we are done, so assume $h(0) \neq 0$, so $h(0) > 0$.
Similarly, $h(1) = f(1) - 1 \in [-1,0]$ and if $h(1) = 0$ then $f(1)=1$ and we are done, so assume $h(1) \neq 0$, so $h(1) < 0$.
Then by Theorem 4.23 there must be some $x \in (0,1)$ such that $h(x) = 0$, so $f(x) = x$, as desired.
\vbox{
\begin{R4.18}
\end{R4.18}
Fix any point $y \in \R$ and any $\ep >0$. Then there exists $N \in \N$ such that $1/N < \ep$. Fix any $m < N$. Let $L_m$ be the largest integer such that $L_m/m < y$ and let $U_m$ be the smallest integer such that $U_m/m > y$.
Let $\delta = \min \left\{ y-\frac{L_m}{m},\ \frac{U_m}{m}-y \right\}_{m=1}^{N-1}>0$.
}
For any $x \in N_{\delta} (y) \backslash \{y\}$, either $x$ is irrational or $x=m/n$ where $m,n$ are integers without common divisors and $n \geq N$. In both cases we have $f(x) < \ep$. Therefore $\lim_{x \ra y} f(x) = 0$ for all $y \in \R$.
If $y$ is irrational then $f(y) = 0$, so by Theorem 4.6 we have that $f$ is continuous at $y$. If $y$ is rational then $f(y) > 0$, so again by Theorem 4.6 we know that $f$ is not continuous at $y$. However, since $\lim_{x \ra y} f(x)$ exists, the discontinuity at $y$ is a simple discontinuity.
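\newpar As a concrete illustration of the choice of $\delta$, take $y = 1/2$ and $\ep = 1/3$, so we may take $N = 4$. The fractions with denominator $m < 4$ nearest to $1/2$ are $0/1$ and $1/1$; $0/2$ and $2/2$; and $1/3$ and $2/3$ (note $1/2$ itself is excluded since the inequalities are strict). The minimum distance from $1/2$ to any of these is $1/2 - 1/3 = 1/6$, so $\delta = 1/6$. Every $x \in N_{1/6}(1/2) \backslash \{1/2\}$ is then either irrational or of the form $m/n$ in lowest terms with $n \geq 4$, so $f(x) \leq 1/4 < \ep$.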
\begin{R4.20}
\end{R4.20}
\be
\litlet
\item $\Rightarrow$ Assume that $\rho_E(x) =0$. By the definition of $\rho_E(x)$ we have $\inf_{z \in E} d(x,z) = 0$. So either there exists a $z \in E$ such that $d(z,x) = 0$ or for each $n \in \N$ there exists $z_n \in E$ such that $0 < d(x,z_n) \leq 1/n$. In the former case, since $d$ is a metric we have $x=z$, so $x \in E \subseteq \overline{E}$. In the latter case, $\{z_n\}$ is a sequence in $E$ converging to $x$, so $x$ is a limit point of $E$, which implies $x \in \overline{E}$.
$\Leftarrow$ Assume that $x \in \overline{E}$. Then either $x \in E$ or $x$ is a limit point of $E$. If $x \in E$ then since $d(x,x) = 0$ we have $\rho_E(x) = 0$. If $x$ is a limit point of $E$ then for every $\ep > 0$ we know that there exists $z \in E$ such that $d(x,z)< \ep$, therefore $\rho_E(x) = \inf_{z \in E} d(x,z) = 0$.
\item Fix any two points $x,y \in X$ and any non-empty set $E \subset X$. Let $z$ be any point in $E$. Then
$$\rho_E(x) = \inf_{p \in E} d(x,p) \leq d(x,z) \leq d(x,y) + d(y,z).$$
Since this holds for all $z \in E$ we have
$$\rho_E (x) \leq d(x,y) + \inf_{p \in E} d(y,p) = d(x,y) + \rho_E (y).$$
Therefore $\rho_E(x) - \rho_E(y) \leq d(x,y)$. By an identical argument with the roles of $x$ and $y$ reversed we obtain $\rho_E(y) - \rho_E(x) \leq d(x,y)$. Therefore $|\rho_E(x) - \rho_E(y)| \leq d(x,y)$.
Therefore, for any $\ep >0$ and any $x,y \in X$, if $d(x,y) < \ep$ then $|\rho_E(x) - \rho_E(y)| < \ep$. So $\rho_E$ is a uniformly continuous function on $X$.
\ee
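\newpar For a concrete example, let $X = \R$ with the usual metric and $E = [0,1]$. Then
$$\rho_E(x) =
\begin{cases}
-x & \mbox{ if $x < 0$,}\\
0 & \mbox{ if $0 \leq x \leq 1$,}\\
x-1 & \mbox{ if $x > 1$,}
\end{cases}
$$
which vanishes exactly on $\overline{E} = E$ and satisfies $|\rho_E(x) - \rho_E(y)| \leq |x-y|$, as the proof above guarantees.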
\begin{R4.22}
\end{R4.22}
Since $A$ and $B$ are closed, $\overline{A} = A$ and $\overline{B} = B$. So by problem 4.20 we know that $\rho_A (x) = 0 \iff x \in A$ and $\rho_B (x) = 0 \iff x \in B$. Then since $A$ and $B$ are disjoint, for every $x \in X$ we know that at least one of $\rho_A (x)$ and $\rho_B (x)$ must be non-zero. As infima of distances (which are all non-negative), the functions $\rho_A$ and $\rho_B$ are both non-negative. Therefore $\rho_A(x) + \rho_B(x)$ is positive for all $x \in X$.
Also by problem 4.20 we know that $\rho_A(x)$ and $\rho_B(x)$ are continuous functions.
So by Theorem 4.9, $f(x)$ is a continuous function.
For any $x \in X$ we have $0 \leq \rho_A (x) \leq \rho_A(x)+\rho_B(x)$.
This, combined with the fact that $\rho_A(x) + \rho_B(x)$ is non-zero, implies that $f(x) \in [0,1]$ for all $x \in X$.
$f(p) = 0$ if and only if the numerator $\rho_A(p)$ is zero. As argued above, this occurs if and only if $p \in A$.
$f(p) = 1$ if and only if the numerator of $f(p)$ is equal to the denominator of $f(p)$. Canceling the $\rho_A(p)$ terms we see that this equality occurs if and only if $\rho_B(p) = 0$. As argued above, this occurs if and only if $p \in B$.
Let $V := f^{-1} ([0,\frac{1}{2}))$ and $W := f^{-1}((\frac{1}{2},1])$. Since $f: X \ra [0,1]$ and the sets $[0,\frac{1}{2}), (\frac{1}{2}, 1]$ are open in $[0,1]$, the pre-images of these sets under the continuous function $f$ must be open in $X$. Therefore $V$ and $W$ are open.
Since $[0,\frac{1}{2})$ and $(\frac{1}{2},1]$ are disjoint and every point in $X$ is mapped to a unique image point in $[0,1]$, the pre-images $V$ and $W$ will be disjoint.
Since $f(p) = 0$ for all $p \in A$, $A \subseteq V$.
Similarly, since $f(p) = 1$ for all $p \in B$, $B \subseteq W$.
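\newpar For instance, in $X = \R$ take $A = \{0\}$ and $B = \{1\}$. Then $\rho_A(x) = |x|$, $\rho_B(x) = |x-1|$, and
$$f(x) = \frac{|x|}{|x| + |x-1|},$$
so $f(0) = 0$, $f(1) = 1$, and the open sets of the problem are $V = f^{-1}\left(\left[0,\tfrac{1}{2}\right)\right) = \left(-\infty, \tfrac{1}{2}\right)$ and $W = f^{-1}\left(\left(\tfrac{1}{2},1\right]\right) = \left(\tfrac{1}{2}, \infty\right)$, since $f(x) < \tfrac{1}{2} \iff |x| < |x-1| \iff x < \tfrac{1}{2}$.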
\begin{R4.23}
\end{R4.23}
\begin{lemma}
Let $f$ be convex in $(a,b)$ and $a < s < t < u < b$. Then
\begin{equation}{\label{Eq:Cx1}}
\frac{f(t)-f(s)}{t-s} \leq \frac{f(u)-f(s)}{u-s} \leq \frac{f(u)-f(t)}{u-t}.
\end{equation}
\end{lemma}
\begin{proof}
Note that $t = \frac{t-s}{u-s} u + \frac{u-t}{u-s} s$. Therefore, by the convexity of $f$ we have
\begin{equation}{\label{Eq:Cx2}}
f(t) = f \left( \frac{t-s}{u-s} u + \frac{u-t}{u-s} s \right) \leq \frac{t-s}{u-s} f(u) + \frac{u-t}{u-s} f(s).
\end{equation}
Subtracting $f(s)$ from (\ref{Eq:Cx2}) we derive
\begin{align*}
f(t) - f(s) &\leq \frac{t-s}{u-s} f(u) + \frac{u-t}{u-s} f(s) - f(s) \\
f(t) - f(s) &\leq \frac{t-s}{u-s} f(u) - \frac{t-s}{u-s} f(s) \\
\frac{f(t) - f(s)}{t-s} &\leq \frac{f(u) - f(s)}{u-s}.
\end{align*}
This proves the first inequality in (\ref{Eq:Cx1}).
For the second inequality, subtracting (\ref{Eq:Cx2}) from $f(u)$ yields
\begin{align*}
f(u) - f(t) &\geq f(u) - \frac{t-s}{u-s} f(u) - \frac{u-t}{u-s} f(s) \\
f(u) - f(t) &\geq \frac{u-t}{u-s} f(u) - \frac{u-t}{u-s} f(s) \\
\frac{f(u) - f(t)}{u-t} &\geq \frac{f(u) -f(s)}{u-s},
\end{align*}
which is the desired result.
\end{proof}
I begin by giving the geometric reason for why any convex function is continuous (following green Rudin) and then provide the detailed delta/epsilon proof.
\begin{multicols}{2}
Let $f$ be convex on $(a,b)$. For any given point $x \in (a,b)$ fix points $s,y,t$ such that $a < s < x < y < t < b$.
Let $S$ be the point $(s, f(s))$. Define $X,Y,T$ similarly. Then by convexity the point $Y$ is on or above the line $\overleftrightarrow{S X}$ but on or below the line $\overleftrightarrow{X T}$. So as $y$ approaches $x$ from the right, $f(y)$ approaches $f(x)$ since it is sandwiched between these two lines. Similarly, as $y$ approaches $x$ from the left, $f(y)$ approaches $f(x)$. Therefore $f$ is continuous at $x$.
\begin{pspicture}(-1,0)(6,5.5)
\qbezier(.5,5)(3,0)(5.5,5)
\psdot(2.5,2.6)
\psdot(1.25,3.72)
\psdot(4.75,3.7)
\psdot(3.75,2.7)
\psline(1.25,3.72)(5,.36)
\psline(4.75,3.7)(.25,1.5)
\rput[tr](1.2,3.65){S}
\rput[t](2.45,2.4){X}
\rput[tl](4.8,3.65){T}
\rput[tl](3.8,2.65){Y}
\rput[tl](5.5,4.8){f(x)}
\end{pspicture}
\end{multicols}
Fix any point $x \in (a,b)$ and any $\ep > 0$. Since $(a,b)$ is open there exists a $\rho > 0$ such that $N_{\rho}(x) \subseteq (a,b)$.
Let $s := x - \rho$ and $t := x + \rho$.
Let $y \in (x,t)$.
Then by the above lemma we have:
$$\frac{f(y)-f(x)}{y-x} \leq \frac{f(t) -f(x)}{t-x} = \frac{f(t) -f(x)}{\rho} \ \ \ \Rightarrow \ \ \ f(y) - f(x) \leq \frac{f(t)-f(x)}{\rho} (y-x).$$
Also by the above lemma
$$\frac{f(y)-f(x)}{y-x} \geq \frac{f(x) - f(s)}{x-s} = \frac{f(x) - f(s)}{\rho} \ \ \ \Rightarrow \ \ \ f(y) - f(x) \geq \frac{f(x)-f(s)}{\rho} (y-x).$$
Fix $\delta > 0$ such that $\ds \delta < \ep \cdot \min \left\{ \frac{\rho}{|f(t)-f(x)|}, \frac{\rho}{|f(x)-f(s)|}, \rho \right\}$. Then for $y \in (x,x+\delta)$ we have
$$f(y) - f(x) \leq \frac{f(t)-f(x)}{\rho} (y -x) \leq \frac{|f(t)-f(x)|}{\rho} \, \delta \leq \ep$$
and
$$f(x) - f(y) \leq \frac{f(s)-f(x)}{\rho} (y -x) \leq \frac{|f(s)-f(x)|}{\rho} \, \delta \leq \ep$$
Therefore $|f(y) - f(x)| \leq \ep$. Since this holds for all $\ep >0$, the right-hand limit of $f$ at $x$ is equal to $f(x)$. By a similar argument, the left-hand limit of $f$ at $x$ is also $f(x)$. Therefore, $f$ is continuous at $x$. Since this holds for all $x \in (a,b)$, $f$ is continuous on $(a,b)$.
Note that this proof depended upon $f$ being defined in the {\sl open} interval $(a,b)$. On a closed interval a function can be convex yet not be continuous.
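\newpar For example, define $f: [0,1] \ra \R$ by $f(x) = 0$ for $x \in [0,1)$ and $f(1) = 1$. For any $x,y \in [0,1]$ and $t \in (0,1)$, the point $(1-t)x + t y$ lies in $[0,1)$ unless $x = y = 1$, so in either case $f((1-t)x + t y) \leq (1-t) f(x) + t f(y)$ since $f \geq 0$. Thus $f$ is convex on $[0,1]$ but discontinuous at the endpoint $x = 1$.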
Now let $f: (a,b) \ra \R$ be a convex function and let $g$ be an increasing convex function on the range of $f$.
Fix any $t \in [0,1]$ and any two points $x,y$ such that $a < x \leq y < b$.
By the convexity of $f$ we know that $f((1-t)x + t \cdot y) \leq (1-t) f(x) + t \cdot f(y)$.
Combining this with the fact that $g$ is non-decreasing we have:
\begin{align*}
g(f( (1-t)x+t \cdot y)) &\leq g ((1-t) f(x) + t \cdot f(y)) \\
& \leq (1-t) g (f(x)) + t \cdot g (f (y))
\end{align*}
with the second inequality following since $g$ is convex.
Therefore $h = g \circ f$ is convex on $(a,b)$.
Convexity plays an important role in a number of the common inequalities of analysis, including Jensen's, H\"{o}lder's, and Minkowski's inequalities.
\begin{ex}
\end{ex}
\begin{multicols}{2}
For $n \in \N, n > 1$ consider the functions
$$f_n (x) =
\begin{cases}
0 & \mbox{ if $x \in [0,\frac{1}{2} - \frac{1}{n}]$}\\
n x - \frac{n}{2} + 1 & \mbox{ if $x \in [\frac{1}{2} - \frac{1}{n}, \frac{1}{2}]$}\\
-n x + \frac{n}{2} + 1 & \mbox{ if $x \in [\frac{1}{2}, \frac{1}{2} + \frac{1}{n}]$}\\
0 & \mbox{ if $x \in [\frac{1}{2} + \frac{1}{n}, 1]$}
\end{cases}
$$
Let $g(x)$ be the zero function. Then
$d(f_n,g) = \int_0^1 |f_n(x)| d x = \frac{1}{n}$. Therefore $\{f_n\}$ converges to $g$ in the $L^1$ metric.
\begin{pspicture}(-.5,-.5)(6.5,5)
\psline(0,-.5)(0,5)
\psline(-.5,0)(6,0)
\psline[linewidth=0.1,dotsize=.2, arrows=*-*](0,0)(2,0)(3,4)(4,0)(6,0)
\psline(-.1,4)(.1,4)
\psline(2,-.1)(2,.1)
\psline(3,-.1)(3,.1)
\psline(4,-.1)(4,.1)
\psline(6,-.1)(6,.1)
\rput[r](-.1,4){1}
\rput[t](2,-.15){$\frac{n-2}{2n}$}
\rput[t](3,-.15){$\frac{1}{2}$}
\rput[t](4,-.15){$\frac{n+2}{2n}$}
\rput[t](6,-.15){1}
\rput[tr](-.15,-.15){0}
\rput[ll](3.65,2.15){$f_n (x)$}
\end{pspicture}
\end{multicols}
However, $F(f_n) = 1$ for all $n > 1$ while $F(g) = 0$, so $\{F(f_n)\}$ does not converge to $F(g)$. Therefore $F$ is not continuous at $g$.
Note that if we put the uniform metric on $X$ then $F$ is a continuous function (in fact $F$ is uniformly continuous).
\begin{ex}
\end{ex}
$\Rightarrow$ Assume that $F$ is bounded, so there exists a $C > 0$ such that $|F(f)| \leq C \| f \|$ for all $f \in X$.
Fix any $f \in X$, $\ep >0$.
Then for any $g \in N_{\ep/C}(f)$, by linearity and boundedness we have
$$|F(g) - F(f)| = |F(g-f)| \leq C \|g-f\| < C \frac{\ep}{C} = \ep.$$
Therefore $F$ is continuous at $f$. Since this holds for all $f \in X$, $F$ is continuous on $X$.
$\Leftarrow$ Assume that $F$ is continuous on $X$. Then for any $f \in X$, $F$ is continuous at $f$. So there exists a $\delta>0$ such that for all $g \in N_{\delta}(f)$, $|F(g) - F(f)| < 1$.
Now fix any non-zero $g \in X$ (if $g = 0$, then $F(g) = 0$ by linearity and there is nothing to prove).
Since $\left\| \left( f + \frac{\delta}{2 \| g \|} g \right) - f \right\| = \delta/2 < \delta$, by linearity
$$ \left| F \left( \frac{\delta}{2 \| g \|} g \right) \right| = \left| F \left( f + \frac{\delta}{2 \| g \|} g \right) - F(f) \right| < 1.$$
Therefore we have
$$|F(g)| = \left| \frac{2 \| g \|}{\delta} F \left( \frac{\delta}{2 \| g \|} g \right) \right| < \frac{2 \| g \|}{\delta}$$
Since $g$ was arbitrary and $\delta$ does not depend on $g$, this shows that $F$ is bounded.
Note that the reverse direction of this proof depended only on $F$ being continuous at a single point $f \in X$. Therefore we have actually shown the slightly stronger statement that boundedness, continuity on $X$, and continuity at a single point of $X$ are all equivalent for the linear functional $F$.
For any bounded $F$, the infimum of the set of numbers $C$ such that $|F(f)| \leq C \|f\|$ for all $f \in X$ is called the {\sl norm} of $F$. This norm can be used to put a metric on the space of bounded linear functionals $F: X \ra \R$.
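\newpar For a concrete example (assuming $X = \mathcal{C}([0,1])$ with the uniform norm $\|f\| = \sup_{x \in [0,1]} |f(x)|$), the evaluation functional $F(f) = f(0)$ is linear and satisfies $|F(f)| = |f(0)| \leq \|f\|$, so $F$ is bounded with norm at most 1; taking $f \equiv 1$ shows that the norm of $F$ is exactly 1.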
\chapter{Sequences of Functions}
\section{Uniform Convergence}
\begin{R7.1}
\end{R7.1}
Let $\{f_n\}$ be a sequence of bounded functions on $E$ that converge uniformly to a function $f$.
Then there exists $N \in \N$ such that for all $n \geq N$ and all $x \in E$, $|f_n(x)-f(x)| \leq 1$.
For all $n \in \N$ we know that $f_n$ is bounded, so for each $n \in \N$ there exists an $M_n \in \R$ such that $|f_n (x)| \leq M_n$ for all $x \in E$.
Then by the triangle inequality we know that $|f(x)| \leq |f_N(x)| + 1 \leq M_N + 1$ for all $x \in E$.
Applying the triangle inequality again we have that for $n \geq N$, $|f_n (x)| \leq |f(x)| + 1 \leq M_N + 2$ for all $x \in E$.
Let $M = \max \{M_1, M_2, \ldots, M_{N-1}, M_N+2\}$.
Then for all $x \in E$ and for all $n \in \N$ we have $|f_n (x)| \leq M$.
Therefore $\{f_n\}$ is uniformly bounded, as desired.
\begin{R7.2}
\end{R7.2}
Let $\{f_n\}$ converge uniformly to $f$ on $E$ and let $\{g_n\}$ converge uniformly to $g$ on $E$.
Fix any $\ep > 0$.
Then there exist $N_1, N_2 \in \N$ such that for $n > N_1$, $x \in E$ we have $|f_n (x) - f(x)| \leq \ep/2$ and for $n > N_2$, $x \in E$ we have $|g_n (x) - g(x)| \leq \ep/2$.
Then for $n > N = \max\{N_1,N_2\}$, by the triangle inequality we have
$$ |(f_n+g_n)(x)- (f+g)(x)| \leq |f_n (x) - f(x)| + |g_n (x) -g (x) | \leq \ep/2 + \ep/2 = \ep$$
for all $x \in E$. Therefore $\{f_n + g_n\}$ converges uniformly to $f+g$ on $E$.
Now assume additionally that $\{f_n\}$ and $\{g_n\}$ are bounded functions.
By the proof of Rudin exercise 7.1, we know that there is a real number $M_1$ that bounds all the $\{f_n\}$ and also the limit function $f$. Similarly, there exists an $M_2$ that bounds the $\{g_n\}$ and the function $g$.
Let $M = \max\{M_1,M_2\}$.
Fix any $\ep > 0$. Proceeding as above we know that there is an $N \in \N$ such that for all $n > N$ and $x \in E$ we have $|f_n (x) - f(x)| \leq \frac{\ep}{2M}$ and $|g_n (x) - g(x)| \leq \frac{\ep}{2M}$.
Then for all $n > N$ and $x \in E$ we have
\begin{align*}
|f_n (x) g_n(x) - f(x) g(x)| &\leq |f_n (x) g_n(x) - f_n (x) g(x)| + |f_n (x) g(x) - f(x) g(x)|\\
&= |f_n (x)| |g_n(x) - g(x)| + |g(x)| |f_n (x) - f(x)|\\
&\leq M \cdot \frac{\ep}{2M} + M \cdot \frac{\ep}{2M} \\
&= \ep.
\end{align*}
Therefore $\{ f_n g_n\}$ converges uniformly to $f g$ on $E$.
\begin{R7.3}
\end{R7.3}
By the previous exercise we know that at least one of the sequences $\{f_n\}$ and $\{g_n\}$ must contain unbounded functions.
In $\R$, let $f_n (x) = x$ for all $n \in \N$. Then $\{f_n\}$ converges uniformly to the function $f(x) =x$.
Let $g_n (x) = 1/n$ for all $n \in \N$. Then $\{g_n\}$ converges uniformly to the zero function $g(x) = 0$.
$(f_n g_n) (x) = x/n$, so $\{f_n g_n\}$ converges point-wise to the zero function. For any fixed $N \in \N$, choose $y > N$. Then $(f_N g_N) (y) = y/N > 1$. Therefore $\{f_n g_n\}$ does not converge uniformly to the zero function. By Theorem 7.9 this implies that $\{f_n g_n\}$ does not converge uniformly on $\R$.
\begin{R7.9}
\end{R7.9}
Let $\{f_n\}$ be a sequence of continuous functions which converges uniformly to a function $f$ on a set $E$. Fix any point $x \in E$ and let $\{x_n\}$ be any sequence of points in $E$ that converges to $x$.
Fix any $\ep > 0$.
Since $f$ is continuous at $x$ (Theorem 7.12) there exists a $\delta > 0$ such that for all $y \in N_{\delta} (x)$, $f(y) \in N_{\ep/2}(f(x))$.
Since $\{x_n\}$ converges to $x$ there exists an $N_1 \in \N$ such that for all $n>N_1$, $d(x,x_n) < \delta$.
Also, from uniform convergence we know that there exists an $N_2 \in \N$ such that for all $n>N_2$ and all $x \in E$, $|f_n(x) - f(x)| \leq \ep/2$.
Let $N = \max \{N_1,N_2\}$.
Then for $n > N$ we have $|f_n (x_n) - f (x)| \leq |f_n (x_n) - f (x_n)| + |f (x_n) - f(x)| \leq \ep/2 + \ep/2 = \ep$.
Therefore $\lim_{n \ra \infty} f_n (x_n) = f(x)$, as desired.
The converse is not true.
Consider the functions $f_n (x) = x/n$ on $\R$. Let $f$ be the zero function on $\R$. Fix any $x \in \R$ and let $\{x_n\}$ be any sequence that converges to $x$.
Fix any $\ep > 0$.
Since $\{x_n\}$ converges to $x$ there exists $N \in \N$ such that for all $n > N$, $|x_n - x| < 1$.
Then for $n > N$ we have $|f_n (x_n)| \leq \frac{|x|+1}{n}$.
So for $n > \max\left\{N, \frac{|x|+1}{\ep} \right\}$ we have $|f_n (x_n)| \leq \frac{|x|+1}{n} \leq \ep$.
Since $\ep$ was arbitrary, this implies that $\lim_{n \ra \infty} f_n (x_n) = 0 = f(x)$.
However, as argued in problem 7.3, $\{f_n\}$ does not converge to $f$ uniformly on $\R$.
\section{Equicontinuous Families of Functions}
\begin{R7.13}
\end{R7.13}
Let $\{f_n\}$ be a sequence of monotonically increasing functions on $\R$ with $0 \leq f_n (x) \leq 1$ for all $x \in \R$ and all $n \in \N$.
Since $\Q$ is countable and the $\{f_n\}$ are point-wise bounded, by Theorem 7.23 we know that there exists a sequence $\{n_k\}$ such that $\{f_{n_k} (r) \}$ converges for all $r \in \Q$.
Define $g(r) : \Q \ra \R$ by letting $g(r)$ be the limit of $\{f_{n_k} (r)\}$ for all $r \in \Q$.
Note that since the $f_n$ are all monotonically increasing, for $x < y$ in $\Q$ we have $g(x) = \lim_{k \ra \infty} f_{n_k} (x) \leq \lim_{k \ra \infty} f_{n_k} (y) = g(y)$, so $g$ is a non-decreasing function on $\Q$.
Define a function $f$ on $\R$ by letting $f(x) = \sup \{ g(r) : r \in \Q, r \leq x\}$.
Since $g$ is non-decreasing on $\Q$, for $r \in \Q$ we have $f(r) = g(r)$, so $\lim_{n \ra \infty} f_n(r) = f(r)$ for $r \in \Q$.
Note that from its definition the new function $f$ is non-decreasing on $\R$.
Let $x \in \R$ be any point at which $f$ is continuous.
Fix any $\ep > 0$.
Then there exists a $\delta > 0$ such that for all $y \in N_{\delta} (x)$, $f(y) \in N_{\ep/2}(f(x))$.
Since $\Q$ is dense in $\R$ there exists a point $y \in (x- \delta, x) \cap \Q$ and a point $z \in (x,x + \delta) \cap \Q$.
Since $y,z \in \Q$ we know that $\lim_{k \ra \infty} f_{n_k} (y) = f(y)$ and $\lim_{k \ra \infty} f_{n_k} (z) = f(z)$.
So there exists $K \in \N$ such that for $k > K$ we have both $|f_{n_k} (y) - f(y)| \leq \ep/2$ and $|f_{n_k} (z) - f(z)| \leq \ep/2$.
By the triangle inequality, for $k > K$ we have $|f_{n_k} (y) - f(x)| \leq \ep$ and $|f_{n_k} (z) - f(x)| \leq \ep$. Since $f_{n_k}$ is monotonically increasing, $f_{n_k} (y) \leq f_{n_k} (x) \leq f_{n_k} (z)$.
Combining these results we have for $k > K$
\begin{equation*}
f_{n_k} (x) - f(x) \leq f_{n_k} (z) - f(x) \leq \ep \ \ \ \ \ \ \mbox{ and } \ \ \ \ \ \
f(x) - f_{n_k} (x) \leq f(x) - f_{n_k} (y) \leq \ep.
\end{equation*}
Therefore $|f_{n_k} (x) - f(x)| \leq \ep$ for all $k > K$.
Since $\ep$ was arbitrary, this implies that $\lim_{k \ra \infty} f_{n_k} (x) = f(x)$.
Since $f(x)$ is a monotone non-decreasing function on $\R$ we know by Theorem 4.30 that $f(x)$ has at most countably many discontinuities. Then again applying Theorem 7.23 we know that there exists a subsequence $\{f_{n_{k_i}}\}$ that converges at every point of discontinuity of $f$. Define $h(x)$ to be equal to $f(x)$ where $f$ is continuous and equal to the limit of this subsequence where $f$ is discontinuous. Then $ \{ f_{n_{k_i}} \}$ converges point-wise to $h(x)$, as desired.
\begin{R7.16}
\end{R7.16}
Let $\{f_n\}$ be an equicontinuous sequence of functions on a compact set $K$ such that $\{f_n\}$ converges point-wise to a function $f$ on $K$.
Since the $f_n$ are continuous functions on a compact space $K$ they are all bounded by Rudin theorem 4.16. Hence $f_n \in \mathcal{C} (K)$ for all $n \in \N$.
Assume that $\{f_n\}$ did not converge uniformly to $f$. Then there exists an $\ep > 0$, such that for all $N \in \N$ there exists an $n > N$ and an $x \in K$ such that $|f_n (x) - f(x)| \geq \ep$.
So pick $n_1$ such that there exists an $x \in K$ such that $|f_{n_1} (x) - f(x) | \geq \ep$.
Then given $n_1, n_2, \ldots, n_{k-1}$, pick $n_k > n_{k-1}$ such that there exists an $x \in K$ such that $|f_{n_k} (x) -f(x) | \geq \ep$.
As noted in Rudin section 3.5, for each $x \in K$, the subsequence $\{f_{n_i} (x)\}$ will converge to the same limit as the original sequence, namely $f(x)$.
So $\{f_{n_i}\}$ converges point-wise to $f(x)$.
Therefore $\{f_{n_i}\}$ is point-wise bounded (Rudin Theorem 3.2).
Since $\{f_{n_i}\} \subseteq \{f_n\}$ we know that $\{f_{n_i}\}$ is equicontinuous.
Therefore by Theorem 7.25 $\{f_{n_i}\}$ must contain a uniformly convergent subsequence.
Since this subsequence also must converge point-wise to $f(x)$, by Rudin Theorem 7.9 we know that the subsequence must converge uniformly to $f(x)$. But by our construction, for each element $f_{n_{i_k}}$ of this subsequence there is an $x \in K$ such that $|f_{n_{i_k}} (x) - f(x)| \geq \ep$, contradicting the definition of uniform convergence.
Therefore the entire sequence $\{f_n\}$ must converge uniformly to $f$.
\begin{R7.18}
\end{R7.18}
Let $M$ be a uniform bound of the $\{|f_n|\}$. Then for any $x,y$ such that $a \leq x < y \leq b$ and for any $n \in \N$ we have
\begin{equation}{\label{E7.18}}
|F_n (y) - F_n(x)| = \left| \int_a^y f_n (t) d t - \int_a^x f_n (t) d t \right|
= \left| \int_x^y f_n (t) d t \right|
\leq \int_x^y |f_n(t)| d t
\leq M |y-x|.
\end{equation}
Therefore $\{F_n\}$ is equicontinuous.
Letting $x = a$ in (\ref{E7.18}) we have that for any $y \in [a,b]$ and any $n \in \N$, $|F_n(y) - F_n(a)| \leq M | y - a | \leq M |b-a|$. Since $F_n(a) = 0$, each $F_n$ is bounded and the family $\{F_n\}$ is point-wise (and uniformly) bounded. Therefore $F_n \in \mathcal{C} ([a,b])$ for all $n \in \N$.
Since $[a,b]$ is compact, by Theorem 7.25 we know that there exists a subsequence $\{F_{n_i}\}$ that converges uniformly on $[a,b]$.
\chapter{Riemann Integration}
\begin{ex}
\end{ex}
Define $f: [a,b] \ra \R$ by $f(x) = 1$ for all $x \in \Q$ and $f(x) = 0$ for all $x \not\in \Q$.
Let $P=\{x_0,x_1,\ldots,x_n\}$ be any partition of $[a,b]$.
Since $\Q$ and $\R \backslash \Q$ are both dense in $[a,b]$ each interval of the partition $P$ will contain both rational and irrational numbers.
Therefore $U(P,f) = \sum_{i=1}^{n} (x_{i}-x_{i-1}) = b-a$ and $L(P,f) = 0$ for all partitions $P$.
So the upper Riemann integral of $f$ over $[a,b]$ is $b-a$ while the lower Riemann integral of $f$ over $[a,b]$ is zero, so $f$ is not Riemann integrable.
When we study Lebesgue integration we will see that $f$ is Lebesgue integrable with $\int_a^b f(x) d x = 0$.
\begin{ex}
\end{ex}
Given an interval $[a,b]$ and a finite collection of points $\{y_i\}_{i=1}^N \subset [a,b]$, define $f : [a,b] \ra \R$ by $f(x) = 1$ for all $x \in \{y_i\}_{i=1}^N$ and $f(x) = 0$ otherwise.
Let $P_n$ be the partition of $[a,b]$ into $n$ equal intervals for $n \in \N$. There are only $N$ points at which $f$ is non-zero, and at each of these points $f$ has value 1. Each of these points can lie in at most 2 of the intervals of our partition, so
\begin{equation}{\label{E6.1}}
U(P_n,f) \leq \frac{b-a}{n} \cdot 2N \ \ \ \ \ \mbox{ and } \ \ \ \ \ L(P_n,f) \geq 0.
\end{equation}
Therefore $U(P_n,f) - L(P_n,f) \leq \frac{b-a}{n} \cdot 2N$.
So given any $\ep >0$, taking $n \geq \frac{(b-a)2N}{\ep}$ we have $U(P_n,f) - L(P_n,f) \leq \ep$.
Therefore by Theorem 6.6 $f$ is integrable on $[a,b]$.
From (\ref{E6.1}) we see that $\inf U(P,f) \leq 0$ and $\sup L(P,f) \geq 0$ where the infimum and supremum are taken over all partitions $P$ of $[a,b]$.
Since $f$ is Riemann integrable these two values must both be equal to $\int_a^b f(x) d x$.
Hence $\int_a^b f(x) d x = 0$.
\begin{ex}
\end{ex}
Since $\Q \cap [0,1]$ is countable we can list its elements. Let $\{x_i\}_{i=1}^{\infty} = \Q \cap [0,1]$ where all the $x_i$ are distinct.
Define $f_n (x) = 1$ for all $x \in \{x_i\}_{i=1}^n$ and $f_n(x) = 0$ otherwise.
Then for all $n \in \N$, $f_n(x) = 0$ for all irrational $x$.
For any $x_i \in \Q \cap [0,1]$, $f_n(x_i) = 1$ for all $n \geq i$.
Therefore the $\{f_n\}_{n=1}^{\infty}$ converge point-wise to the function $f$ defined in exercise 4.1.
From exercise 4.1 we know that $f(x)$ is not Riemann integrable, therefore $\int_0^1 \lim_{n \ra \infty} f_n (x) d x$ does not exist.
However, by exercise 4.2, for all $n \in \N$ we know that $\int_0^1 f_n (x) d x = 0$, so $\lim_{n \ra \infty} \int_0^1 f_n (x) d x = 0$.
As noted above, $f(x)$ is Lebesgue integrable, and using Lebesgue integration we would have
$$\int_0^1 \lim_{n \ra \infty} f_n (x) d x = \int_0^1 f(x) d x = 0 = \lim_{n \ra \infty} \int_0^1 f_n (x) d x.$$ This ability to interchange limits and Lebesgue integrals is an application of Lebesgue's dominated convergence theorem. This is one of the ways in which Lebesgue integration is 'nicer' than Riemann integration.
\begin{ex}
\end{ex}
Let $f : [a,b] \ra [0, \infty)$ be a continuous function such that $\int_a^b f(x) dx = 0$.
Assume that there exists a point $p \in [a,b]$ such that $f(p) \neq 0$.
Then by continuity, there exists $\delta > 0$ such that for all $y \in N_{\delta} (p)$, $|f(y) - f(p)| < |f(p)|/2$. So for $y \in N_{\delta} (p)$, $|f(y)| > |f(p)|/2$.
If $p \not\in \{a,b\}$, let $\delta' = \min\{\delta,b-p,p-a\}$.
Then
\begin{equation}{\label{E:6.2}}
\int_a^b f(x) dx = \int_a^b |f(x)| dx \geq \int_{p-\delta'}^{p + \delta'} |f(x)| dx \geq \frac{|f(p)|}{2} \cdot 2 \delta' > 0,
\end{equation}
contradicting the assumption that $\int_a^b f(x) \, dx = 0$. Therefore $f$ must be identically zero on $[a,b]$.
If $p$ is either $a$ or $b$ then the calculation (\ref{E:6.2}) can be repeated with $\delta' = \min\{ \delta, b-a\}$ and with the $\delta'$ neighborhood about $p$ now extending in only one direction from $p$.
\begin{ex}
\end{ex}
\be
\litlet
\item For any $c \in (0,1)$, $\int_0^1 f(x) dx = \int_0^c f(x) dx + \int_c^1 f(x) dx$. Therefore, it is sufficient to show that $\lim_{c \ra 0^+} \int_0^c f(x) dx = 0$.
Since $f$ is Riemann integrable on $[0,1]$, $f$ is bounded on $[0,1]$. So there exists an $M$ such that $|f(x)| \leq M$ for all $x \in [0,1]$.
Then $\left| \int_0^c f(x) dx \right| \leq M c$, so $\lim_{c \ra 0^+} \int_0^c f(x) dx = 0$, as desired.
\item Define $f$ on $(0,1]$ by $f(x) = (-1)^n (n+1)$ for $x \in \left( \frac{1}{n+1}, \frac{1}{n} \right]$, $n \in \N$.
Then $f$ is Riemann integrable on $[c,1]$ for every $c \in (0,1)$, and for each $n \in \N$
\begin{equation}\label{E6.7}
\int_{\frac{1}{n+1}}^{\frac{1}{n}} f(x) dx = (-1)^n (n+1) \cdot \left( \frac{1}{n} - \frac{1}{n+1} \right) = \frac{(-1)^n}{n}.
\end{equation}
Fix any $\ep > 0$. Let $L = \sum_{n=1}^{\infty} \frac{(-1)^n}{n}$. Then there exists an $M \in \N$ such that for $m \geq M$, $ \left| L - \sum_{n=1}^{m} \frac{(-1)^n}{n} \right| \leq \ep/2$.
Fix $M' \in \N$ such that $M' \geq M$ and $1/M' \leq \ep/2$. Fix any $c \in \left( 0, \frac{1}{M'+1} \right] $. Then there exists an $m \in \N$ such that $c \in \left( \frac{1}{m+2}, \frac{1}{m+1} \right]$. So $m \geq M' \geq M$ and $1/(m+1) \leq \ep/2$.
Then by (\ref{E6.7})
$$ \int_c^1 f(x) dx = \int_c^{\frac{1}{m+1}} f(x) dx + \sum_{n=1}^{m} \int_{\frac{1}{n+1}}^{\frac{1}{n}} f(x) dx
= \left(\frac{1}{m+1} - c \right) \cdot (-1)^{m+1} \cdot (m+2) + \sum_{n=1}^m \frac{(-1)^n}{n}.$$
Therefore
$$\left| L - \int_c^1 f(x) dx \right| \leq
\left| L - \sum_{n=1}^m \frac{(-1)^n}{n} \right| + \left| \left(\frac{1}{m+1} - c \right) \cdot (-1)^{m+1} \cdot (m+2) \right| \leq
\frac{\ep}{2} + \left|\frac{1}{m+1} \right| \leq \frac{\ep}{2} + \frac{\ep}{2} = \ep.
$$
Therefore $\lim_{c \ra 0^+} \int_c^1 f(x) dx = L$.
However, for $m \in \N$ by (\ref{E6.7}) we have
$$\int_{\frac{1}{m+1}}^1 |f(x)| dx = \sum_{n=1}^m \int_{\frac{1}{n+1}}^{\frac{1}{n}} |f(x)| dx
=\sum_{n=1}^m \left| \frac{(-1)^n}{n} \right| = \sum_{n=1}^m \frac{1}{n}
$$
Since $\sum \frac{1}{n}$ diverges, $\lim_{m \ra \infty} \int_{\frac{1}{m+1}}^{1} |f(x)| dx = \infty$.
Therefore by Theorem 4.2 we have that $\lim_{c \ra 0^+} \int_c^1 |f(x)| dx$ does not exist.
The function $g(x) =\frac{1}{x} \sin \left( \frac{1}{x} \right)$ provides an example of a continuous function with the desired properties. The proof follows by the same type of argument, though the oscillations of $g$ require estimates in place of the direct calculations possible for the step function $f$ described above.
\ee
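To indicate why $g(x) = \frac{1}{x} \sin \left( \frac{1}{x} \right)$ behaves like the step function $f$ above, the substitution $u = 1/x$ (so $du = -dx/x^2$) gives
$$\int_c^1 \frac{1}{x} \sin \left( \frac{1}{x} \right) dx = \int_1^{1/c} \frac{\sin u}{u} \, du.$$
As $c \ra 0^+$ the right-hand side converges, since the contributions over the intervals $[k\pi,(k+1)\pi]$ alternate in sign with decreasing absolute value, as in the alternating series test. Replacing $\sin u$ by $|\sin u|$ produces a contribution of at least $\frac{2}{(k+1)\pi}$ over $[k\pi,(k+1)\pi]$, so the integral of $|g|$ diverges like the harmonic series.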
\begin{ex}
\end{ex}
Let $f$ and $g$ be Riemann integrable on $[a,b]$.
Since $f,g$ are Riemann integrable they are both bounded on $[a,b]$. Fix $M$ such that $|f(x)| < M$ and $|g(x)| < M$ for all $x \in [a,b]$.
Fix $\ep > 0$.
Then by taking common refinements we know that there exists a partition $P=\{x_1, x_2, \ldots, x_n\}$ of $[a,b]$ such that $U(P,f) - L(P,f) \leq \frac{\ep}{4M}$ and $U(P,g) - L(P,g) \leq \frac{\ep}{4M}$.
For any function $h : [a,b] \ra \R$, let $M_i^h = \sup_{x \in [x_i, x_{i+1}]} h(x)$ and $m_i^h = \inf_{x \in [x_i, x_{i+1}]} h(x)$.
Then for $i=1,2, \ldots, n-1$, there exist points $y_i,z_i \in [x_i, x_{i+1}]$ such that $M_i^{f g} - f(y_i)g(y_i) < \frac{\ep}{4(b-a)}$ and $f(z_i)g(z_i) - m_i^{f g} < \frac{\ep}{4(b-a)}$.
Then we have
\begin{align*}
U(P,f g)-L(P,f g) &= \sum_{i=1}^{n-1} \left(M_i^{f g} - m_i^{f g} \right) (x_{i+1} -x_i) \\
&\leq \sum_{i=1}^{n-1} \left( f(y_i) g(y_i) - f(z_i) g(z_i) \right) (x_{i+1} -x_i) + 2(b-a)\cdot \frac{\ep}{4(b-a)} \\
&= \sum_{i=1}^{n-1} \left( f(y_i)g(y_i) - f(z_i) g(y_i) + f(z_i)g(y_i) - f(z_i) g(z_i) \right) (x_{i+1} -x_i) + \frac{\ep}{2}
\end{align*}
Breaking the sum into two pieces and factoring yields
\begin{align*}
U(P,f g)-L(P,f g) &\leq \sum_{i=1}^{n-1} g(y_i) \left( f(y_i) - f(z_i) \right) (x_{i+1} -x_i) + \sum_{i=1}^{n-1} f(z_i) \left( g(y_i) - g(z_i) \right) (x_{i+1} -x_i) + \frac{\ep}{2}\\
&\leq M \sum_{i=1}^{n-1} \left( M_i^f- m_i^f \right) (x_{i+1} -x_i) + M \sum_{i=1}^{n-1} \left( M_i^g - m_i^g \right) (x_{i+1} -x_i) + \frac{\ep}{2}\\
&= M ( U(P,f) - L(P,f)) + M( U(P,g) - L(P,g) ) + \frac{\ep}{2} \\
&\leq 2M \cdot \frac{\ep}{4M} + \frac{\ep}{2} = \ep
\end{align*}
Therefore $f g$ is Riemann integrable on $[a,b]$.
An alternative approach is to prove that for any Riemann integrable function $f$, $f^2$ is also Riemann integrable, and then to use this result to consider the product of two Riemann integrable functions.
\begin{lemma}
Let $f$ be a Riemann integrable function on $[a,b]$. Then $f^2$ is also Riemann integrable on $[a,b]$.
\end{lemma}
\begin{proof}
Fix any $\ep > 0$.
Since $f$ is Riemann integrable there exists $B$ such that $|f(x)| < B$ for all $x \in [a,b]$.
Then there exists a partition $P=\{x_1, x_2, \ldots, x_n\}$ of $[a,b]$ such that $U(P,f) - L(P,f) < \frac{\ep}{2B}$.
For any $y,z$ in the same subinterval $[x_i,x_{i+1}]$ we have
$$|f^2(y) - f^2(z)| = |f(y)+f(z)| \cdot |f(y)-f(z)| \leq 2B \left( M_i^f - m_i^f \right),$$
so $M_i^{f^2} - m_i^{f^2} \leq 2B \left( M_i^f - m_i^f \right)$ for each $i$.
Therefore $U(P,f^2) - L(P,f^2) \leq 2B \left( U(P,f) - L(P,f) \right) < \ep$, so $f^2$ is Riemann integrable on $[a,b]$.
\end{proof}
Given the lemma, note that $f+g$ is Riemann integrable, so $(f+g)^2$, $f^2$, and $g^2$ are all Riemann integrable, and hence so is $f g = \frac{1}{2} \left( (f+g)^2 - f^2 - g^2 \right)$.
\begin{ex}
\end{ex}
Let $f,g$ be Riemann integrable on $[a,b]$.
Fix any $\ep > 0$.
By taking refinements we can construct a partition $P=\{x_1,x_2, \ldots, x_n\}$ such that
$$ U(P,f^2) - \int_a^b f^2 dx < \ep \ \ \ \ \ \ \ U(P,g^2) - \int_a^b g^2 dx < \ep \ \ \ \ \ \ \ U(P,f g) - \int_a^b f g \ \ dx < \ep .$$
For $i=1,2, \ldots, n-1$, there exist points $y_i \in [a,b]$ such that $M_i^{f g} - f(y_i)g(y_i) < \ep/(b-a)$.
Then
\begin{equation*}
U(P,f g) = \sum_{i=1}^{n-1} M_i^{f g} (x_{i+1} - x_i) \leq \sum_{i=1}^{n-1} f(y_i)g(y_i) (x_{i+1} - x_i) + (b-a) \cdot \frac{\ep}{(b-a)}
\end{equation*}
Applying Theorem 1.35 with $a_i = f(y_i) \sqrt{x_{i+1} - x_i}$ and $b_i = g(y_i) \sqrt{x_{i+1} - x_i}$ yields
\begin{align*}
U(P,f g) &\leq \left( \sum_{i=1}^{n-1} f^2(y_i)(x_{i+1} - x_i) \right)^{1/2} \left( \sum_{i=1}^{n-1} g^2(y_i)(x_{i+1} - x_i) \right)^{1/2} + \ep \\
&\leq \left( \sum_{i=1}^{n-1} M_i^{f^2}(x_{i+1} - x_i) \right)^{1/2} \left( \sum_{i=1}^{n-1} M_i^{g^2}(x_{i+1} - x_i) \right)^{1/2} + \ep \\
&= ( U(P,f^2) )^{1/2} \cdot (U(P,g^2))^{1/2} + \ep
\end{align*}
Therefore we have
\begin{align}
\nonumber \int_a^b f g \ \ dx & \leq U(P,f g) + \ep \\
\nonumber &\leq ( U(P,f^2) )^{1/2} \cdot (U(P,g^2))^{1/2} + 2\ep \\
\nonumber &\leq \left( \int_a^b f^2 dx + \ep \right)^{1/2} \cdot \left(\int_a^b g^2 dx +\ep \right)^{1/2} + 2\ep\\
\nonumber &= \left[ \left( \int_a^b f^2 dx \right) \cdot \left(\int_a^b g^2 dx \right) + \ep \left( \int_a^b f^2 dx + \int_a^b g^2 dx \right) + \ep^2 \right]^{1/2} + 2 \ep \\
\label{E4.5} &\leq \left( \int_a^b f^2 dx \right)^{1/2} \cdot \left(\int_a^b g^2 dx \right)^{1/2} + \ep^{1/2} \left( \int_a^b f^2 dx + \int_a^b g^2 dx \right)^{1/2} + 3 \ep
\end{align}
The last inequality follows from the fact that the square root is a concave function, so for $p,q,r \geq 0$, $\sqrt{p+q+r} \leq \sqrt{p} + \sqrt{q} + \sqrt{r}$.
Since $\ep>0$ was arbitrary, (\ref{E4.5}) shows that $\int_a^b f g \ \ dx \leq \left( \int_a^b f^2 dx \right)^{1/2} \cdot \left(\int_a^b g^2 dx \right)^{1/2}$, as desired.
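As a concrete check of the Schwarz inequality just proved, take $[a,b]=[0,1]$, $f(x) = 1$, and $g(x) = x$. Then
$$\int_0^1 f g \ \ dx = \int_0^1 x \, dx = \frac{1}{2}, \ \ \ \ \ \left( \int_0^1 f^2 dx \right)^{1/2} \cdot \left( \int_0^1 g^2 dx \right)^{1/2} = 1 \cdot \left( \frac{1}{3} \right)^{1/2} = \frac{1}{\sqrt{3}},$$
and indeed $\frac{1}{2} \leq \frac{1}{\sqrt{3}}$.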
\begin{ex}
\end{ex}
Let $g : [a,b] \ra [c,d]$ be a differentiable function such that $g'$ is Riemann integrable on $[a,b]$.
Let $f$ be a continuous function on $[c,d]$.
Define $F(x) = \int_c^x f(t) d t$ for $x \in [c,d]$.
Then by Theorem 6.20 we know that $F$ is differentiable on $[c,d]$ with $F'(x)=f(x)$.
By the chain rule (Theorem 5.5), $F \circ g$ is differentiable on $[a,b]$ with $(F \circ g)'(x) = F'(g(x)) \cdot g'(x) = f(g(x)) \cdot g'(x)$.
Since $g$ is differentiable it is continuous, and $f$ is continuous, so $f \circ g$ is continuous on $[a,b]$ and hence Riemann integrable.
Since $g'$ is also Riemann integrable on $[a,b]$, exercise 4.4 shows that $f(g(x)) \cdot g'(x)$ is Riemann integrable on $[a,b]$.
Then by the fundamental theorem of calculus (Theorem 6.21) we have
$$\int_a^b f(g(x)) \cdot g'(x) dx = F(g(b)) - F(g(a)) = \int_{g(a)}^{g(b)} f(x) dx,$$
which is the desired result.
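As a concrete instance of this change of variables formula, take $[a,b]=[0,1]$, $g(x) = x^2$ (so $g'(x) = 2x$ is continuous, hence Riemann integrable), and $f(t) = \cos t$. Then
$$\int_0^1 \cos (x^2) \cdot 2x \, dx = \int_{g(0)}^{g(1)} \cos t \, dt = \int_0^1 \cos t \, dt = \sin 1.$$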
\begin{ex}
\end{ex}
\begin{lemma}
Let $f,g \in \mathcal{R}$. Then $\| f + g \|_2 \leq \| f \|_2 + \| g \|_2$.
\end{lemma}
\begin{proof}
Since $\|u\|_2$ is non-negative for any Riemann integrable function $u$, it is sufficient to verify the square of the desired inequality. From the triangle inequality we have
$$ \| f + g \|_2^2
= \int_a^b |f + g|^2 dx
\leq \int_a^b \left( |f|^2 + 2 |f g| + |g|^2 \right) dx
= \|f\|_2^2 + 2\int_a^b |f g| dx + \| g \|_2^2.$$
Combining this with the Schwarz inequality yields
$$ \| f + g \|_2^2
\leq \|f\|_2^2 + 2 \left( \int_a^b |f|^2 dx \right)^{1/2} \left( \int_a^b |g|^2 dx \right)^{1/2} + \| g \|_2^2
= \|f\|_2^2 + 2 \|f\|_2 \cdot \|g\|_2 +\| g \|_2^2
= (\|f\|_2 + \|g\|_2)^2.$$
As noted above, this implies the desired result.
\end{proof}
Now let $f,g,h \in \mathcal{R}$.
Then by the above lemma we have
$$\|f - h\|_2 = \| (f-g) + (g-h) \|_2 \leq \| f-g \|_2 + \| g-h \|_2,$$
which is the desired inequality.
This shows that $\|f-g\|_2$ obeys the triangle inequality. However, this is not a metric on the set of Riemann integrable functions because there are non-zero Riemann integrable functions that have integral zero (see exercise 4.2). Since $\|f-g\|_2$ is non-negative, symmetric, and obeys the triangle inequality it is called a \emph{pseudo-metric} on $\mathcal{R}$. Using the ideas from Rudin problem 6.2 one can show that $\|f-g\|_2$ does give a metric on the set of continuous functions on $[a,b]$. When we study Lebesgue theory we will define $\| f-g \|_2$ as a metric on equivalence classes of Lebesgue square integrable functions, where two functions are equivalent if they disagree only on a set of measure zero.
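To see concretely why $\|f-g\|_2$ fails to be a metric on $\mathcal{R}$, take $f = 0$ and let $g$ be the function on $[a,b]$ with $g(a) = 1$ and $g(x) = 0$ for $x \neq a$. Then $f \neq g$, but $|f-g|^2$ is zero except at the single point $a$, so as in exercise 4.2 we have $\|f-g\|_2^2 = \int_a^b |f-g|^2 dx = 0$.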
\begin{ex}
\end{ex}
Let $f \in \mathcal{R}$. Fix $\ep > 0$. Let $M$ be a bound for $|f|$ on $[a,b]$.
Then we know that there exists a partition $P= \{x_1,x_2, \ldots ,x_n\}$ of $[a,b]$ such that $U(P,f)-L(P,f) < \ep^2/(2M)$.
Define $g:[a,b] \ra \R$ by
$$g(t) = \frac{x_{i+1}-t}{x_{i+1} - x_i} f(x_i) + \frac{t-x_i}{x_{i+1}-x_i} f(x_{i+1}) \ \ \ \ \ t \in [x_i,x_{i+1}].$$
Then $g$ is a piece-wise linear and continuous function.
On each interval $[x_i,x_{i+1}]$, the value of $g(t)$ is always between $f(x_i)$ and $f(x_{i+1})$.
So in particular, for all $t \in [x_i,x_{i+1}]$, $|f(t)-g(t)| \leq |M_i^f-m_i^f| \leq 2M$.
Therefore $0 \leq L(P,|f-g|) \leq U(P,|f-g|) \leq U(P,f)-L(P,f) < \ep^2/(2M)$.
So $\int_a^b |f-g| dx < \ep^2/(2M)$.
Since $|f-g|<2M$ on $[a,b]$, $\int_a^b |f-g|^2 dx \leq 2M \int_a^b |f-g| dx < 2M \cdot \ep^2/(2M) = \ep^2$.
Therefore $\|f-g\|_2 < \ep$, as desired.
This shows that the continuous functions are 'dense' in the set of Riemann integrable functions using the pseudo-metric $\|f-g\|_2$.
\chapter{Lebesgue Integration}
\section{Measure Spaces}
\begin{ex}
\end{ex}
Let $S$ be a set in $\R$.
Fix any $\ep > 0$.
If $m^*(S) = \infty$ take $G = \R$.
Then $S \subseteq G$ and $m^*(S) = m^*(G) = \infty$.
For the case where $m^*(S)$ is finite, by the definition of $m^*(S)$ there exists a sequence of elementary sets $\{A_n\}$ such that $S \subseteq \cup_{n=1}^{\infty} A_n$ and $m^*(S) + \ep/2 \geq \sum_{n=1}^{\infty} m(A_n)$.
Let $m_n$ be the number of intervals in the elementary set $A_n$.
Some of these intervals may be closed or half open.
For these intervals of the form $[a,b]$, $[a,b)$, or $(a,b]$, extend the interval by $\frac{\ep}{2^{n+2} m_n}$ on both sides to create a new open interval $(a-\frac{\ep}{2^{n+2} m_n} , b + \frac{\ep}{2^{n+2} m_n})$.
Let $B_n$ be the set $A_n$ with the expanded intervals.
Then $B_n \supseteq A_n$, $B_n$ is a union of open intervals, hence open, and $m^*(B_n) \leq m(A_n)+2 m_n \cdot \frac{\ep}{2^{n+2} m_n} = m(A_n)+ \frac{\ep}{2^{n+1}}$.
Let $B = \cup_{n=1}^{\infty} B_n$.
Then $B$ is open and $S \subseteq \cup_{n=1}^{\infty} A_n \subseteq \cup_{n=1}^{\infty} B_n = B$.
Since $S \subseteq B$ we know that $m^*(S) \leq m^*(B)$ by Theorem 5.1(e).
From above and using a geometric series we have
$$m^*(B) \leq \sum_{n=1}^{\infty} m^*(B_n) \leq \sum_{n=1}^{\infty} \left(m(A_n) + \frac{\ep}{2^{n+1}}\right) = \sum_{n=1}^{\infty} m(A_n) + \sum_{n=1}^\infty \frac{\ep}{2 \cdot 2^{n}} \leq m^*(S) + \frac{\ep}{2} + \frac{\ep}{2} = m^*(S) + \ep.$$
So we can find an open set containing $S$ with measure arbitrarily close to that of $S$.
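For example, applying this construction to $S = \Q \cap [0,1] = \{x_n\}_{n=1}^{\infty}$: the open set $G = \cup_{n=1}^{\infty} \left( x_n - \frac{\ep}{2^{n+2}}, x_n + \frac{\ep}{2^{n+2}} \right)$ contains $S$ and satisfies $m^*(G) \leq \sum_{n=1}^{\infty} \frac{\ep}{2^{n+1}} = \frac{\ep}{2} < \ep$, which also shows that $m^*(\Q \cap [0,1]) = 0$.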
\begin{ex}
\end{ex}
Let $(X,m)$ be a measure space.
Let $\{E_k\}_{k=1}^{\infty}$ be a collection of measurable sets in $X$ such that $\sum_{k=1}^{\infty} m(E_k) =M < \infty$.
Let $E = \{ x \in X : x \in E_k \mbox{ for infinitely many $k$ }\}$.
I claim that $E = \cap_{n=1}^{\infty} \left( \cup_{k>n} E_k \right)$.
\begin{align*}
x \in E &\iff \mbox{ $x$ is in infinitely many of the $E_k$} \\
&\iff \mbox{ For any $n \in \N$ there exists $k>n$ such that $x \in E_k$ }\\
&\iff \mbox{ $x \in \cup_{k>n} E_k$ for every $n \in \N$ }\\
&\iff x \in \cap_{n=1}^{\infty} \left( \cup_{k>n} E_k \right).
\end{align*}
Since the $E_k$ are all measurable, for each $n \in \N$ we know that $\cup_{k>n} E_k$, as a countable union of measurable sets, is measurable.
Then $E = \cap_{n=1}^{\infty} \left( \cup_{k>n} E_k \right)$ is a countable intersection of measurable sets and therefore measurable.
We now show that $m(E) = 0$.
Fix any $\ep > 0$.
Then there exists $N \in \N$ such that for $n > N$, $|M - \sum_{k=1}^n m(E_k)| < \ep$.
So for $n>N$, $\sum_{k=n+1}^{\infty} m(E_k) < \ep$.
Therefore $m(\cup_{k>n} E_k) \leq \sum_{k>n} m(E_k) < \ep$ for $n > N$.
Since $E = \cap_{n=1}^{\infty} \left( \cup_{k>n} E_k \right)$ we have that $m(E) \leq m(\cup_{k>n} E_k)$ for every $n \in \N$.
Therefore we have $m(E) < \ep$ for any $\ep>0$.
Since we know $m(E) \geq 0$, this implies that $m(E) = 0$, as desired.
\begin{ex}
\end{ex}
Let $E \in \mathcal{E}$ and $A \subseteq \R$.
Fix any $\ep > 0$.
Then there exists a sequence of elementary sets $\{A_n\}$ such that $A \subseteq \cup_{n=1}^{\infty} A_n$ and $m^*(A) \geq \sum_{n=1}^{\infty} m(A_n) - \ep$.
For any $n \in \N$, since $A_n,E \in \mathcal{E}$ we know that $A_n \cap E \in \mathcal{E}$ and $A_n \backslash E \in \mathcal{E}$.
Since $A_n = (A_n \cap E) \cup (A_n \backslash E)$ and this union is disjoint we know that $m(A_n) = m(A_n \cap E) + m(A_n \backslash E)$.
Further, $(A \cap E) \subseteq \cup_{n=1}^{\infty} (A_n \cap E)$, so by the definition of $m^*$ we know that $m^*(A \cap E) \leq \sum_{n=1}^{\infty} m(A_n \cap E)$.
Similarly we know that $m^*(A \backslash E) \leq \sum_{n=1}^{\infty} m(A_n \backslash E)$.
Combining these results we have
$$m^*(A) \geq \sum_{n=1}^{\infty} m(A_n) - \ep
= \sum_{n=1}^{\infty} \left( m(A_n \cap E) + m(A_n \backslash E) \right) - \ep
\geq m^* (A \cap E) + m^* (A \backslash E) - \ep.$$
Since $\ep>0$ was arbitrary, this implies that $m^*(A) \geq m^* (A \cap E) + m^* (A \backslash E)$, as desired.
\section{The Lebesgue Integral}
\begin{ex}
\end{ex}
Let $f \in M^+(\R,\mathcal{L})$.
For $n \in \N$ and $k=0,1,2,\ldots, n2^n-1$ define
$$ E_{n,k} = f^{-1} \left( \left[\frac{k}{2^n}, \frac{k+1}{2^n} \right) \right).$$
For $k=n2^n$ define $E_{n,k} = f^{-1} \left( [n, \infty] \right).$
Then for $n \in \N$ let
$$\phi_n = \sum_{k=0}^{n2^n} \frac{k}{2^n} \cdot \chi_{E_{n,k}} $$
Theorem 5.3 combined with the fact that intersections of measurable sets are measurable shows that the sets $E_{n,k} \in \mathcal{L}$.
For any fixed $n \in \N$, since the intervals $\{ [k2^{-n}, (k+1)2^{-n}) \}_{k=0}^{n 2^n -1} \cup \{[n,\infty]\}$ are disjoint and cover the entire range of $f$, the sets $\{E_{n,k} \}_{k=0}^{n 2^n}$ will be disjoint and cover all of $\R$.
For any fixed $n$, the coefficients $k2^{-n}$ are distinct for distinct values of $k$.
If any of the $E_{n,k}$ are empty we can remove them from the defining sum for $\phi_n$ without changing the value of $\phi_n$.
Therefore, for each $n \in \N$, $\phi_n$ can be written as a finite linear combination of characteristic functions of disjoint non-empty measurable sets with distinct coefficients, so $\phi_n$ is a simple function.
\begin{enumerate}
\litlet
\item Fix any $x \in \R$ and any $n \in \N$.
We consider two cases.
\begin{itemize}
\item $x \in E_{n,k}$ for some $k \in \{0,1, \ldots n2^n-1\}$.
\newline Since $[k 2^{-n}, (k+1) 2^{-n} ) = [2k 2^{-(n+1)} , (2k+1) 2^{-(n+1)} ) \cup [(2k+1) 2^{-(n+1)} , (2k+2) 2^{-(n+1)})$ we know that $x \in E_{n+1,2k} \cup E_{n+1,2k+1}$.
\newline So $\phi_{n+1}(x) \in \{2k 2^{-(n+1)},(2k+1) 2^{-(n+1)} \} = \{k 2^{-n} , (k + 1/2) 2^{-n} \}$.
\newline Therefore $\phi_n (x) = k 2^{-n} \leq \phi_{n+1} (x)$.
\item $x \in E_{n,n2^n}$.
If $f(x) = \infty$ then $\phi_n(x) = n < n+1 = \phi_{n+1}(x)$.
Otherwise $x \in [k 2^{-n}, (k+1) 2^{-n})$ for some integer $k \geq n2^n$.
\newline So by the argument of the above case $x \in [l 2^{-(n+1)}, (l+1) 2^{-(n+1)})$ for $l \in \{2k, 2k+1\}$.
\newline If $l \geq (n+1) 2^{n+1}$ then $\phi_{n+1} (x) = (n+1) > n = \phi_n (x)$.
\newline If $l < (n+1) 2^{n+1}$ then $\phi_{n+1} (x) = l2^{-(n+1)} \geq 2k \, 2^{-(n+1)} = k2^{-n} \geq n = \phi_n (x)$.
\end{itemize}
So in all cases we have $\phi_n(x) \leq \phi_{n+1} (x)$ for all $x \in \R$ and all $n \in \N$.
Since all of the coefficients $k 2^{-n}$ are non-negative, all of the $\phi_n$ are non-negative.
So $0 \leq \phi_n \leq \phi_{n+1}$ for all $n \in \N$.
\item
Fix any point $x \in \R$ and any $\ep > 0$.
If $f(x) = \infty$ then $\phi_n(x) = n$ for all $n \in \N$, so $\lim_{n \ra \infty} \phi_n(x) = f(x)$.
If $f(x) < \infty$, fix $N \in \N$ such that $f(x) < N$ and $\frac{1}{2^N} < \ep$.
Now fix any $m > N$.
There exists $j_m \in \{0,1,2,\ldots,m2^m-1\}$ such that $x \in E_{m,j_m}$.
Then $\phi_m(x) = j_m 2^{-m}$ and $f(x) \in [j_m 2^{-m},(j_m+1) 2^{-m})$.
Therefore $|f(x)-\phi_m(x)| < (j_m+1) 2^{-m} - j_m 2^{-m} = 2^{-m} < 2^{-N} < \ep$.
Since $x \in \R$ and $\ep>0$ were arbitrary, this shows that $\phi_n$ converges point-wise to $f$.
\end{enumerate}
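Concretely, for $x$ with $f(x) < n$ the construction gives $\phi_n(x) = \lfloor 2^n f(x) \rfloor 2^{-n}$, the value of $f(x)$ truncated to $n$ binary digits. For instance, if $f(x) = 2/3$ then
$$\phi_1(x) = \frac{1}{2}, \ \ \ \ \phi_2(x) = \frac{2}{4} = \frac{1}{2}, \ \ \ \ \phi_3(x) = \frac{5}{8}, \ \ \ \ \phi_4(x) = \frac{10}{16} = \frac{5}{8}, \ \ \ldots$$
a non-decreasing sequence increasing to $2/3$, as parts (a) and (b) assert.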
\begin{ex}
\end{ex}
$\Rightarrow$ Assume that $\int f d m =0$. For $n \in \N$, let $A_n = \{x \in \R : f(x) > 1/n\}$.
Since $f$ is measurable the $A_n$ are measurable by Theorem 5.3.
For each $A_n$ we can define the simple function $\phi_n = \frac{1}{n} \chi_{A_n} + 0 \cdot \chi_{A_n^c}$.
Then $\phi_n \leq f$ and $\int \phi_n \, d m = \frac{1}{n} \cdot m(A_n) \geq 0$.
Then by the definition of the integral $\frac{1}{n} m(A_n) \leq \int f d m = 0$.
So $m(A_n) = 0$ for all $n \in \N$.
For $x \in \R$, $f(x) > 0$ if and only if $x \in A_n$ for some $n \in \N$, so $A = \cup_{n=1}^{\infty} A_n$.
Therefore $m(A) \leq \sum_{n=1}^{\infty} m(A_n) = 0$.
Since $m(A)$ must be non-negative this implies $m(A) = 0$.
$\Leftarrow$ Assume that $m(A) = 0$.
Let $\phi$ be any simple function such that $0 \leq \phi \leq f$.
Then $\phi$ must be zero on $A^c$.
Therefore, $\phi = \sum_{i=1}^n a_i \chi_{E_i}$ where there is one $j \in \{1,2, \ldots, n\}$ such that $a_j = 0$ and $A^c \subseteq E_j$.
Then for $i \neq j$, $E_i \subseteq A$, so $m(E_i) = 0$.
Therefore $\int \phi \, d m = 0$, which implies that $\int f d m = 0$.
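The characteristic function of the rationals illustrates the $\Leftarrow$ direction: taking $f = \chi_{\Q}$, the set $A = \{x \in \R : f(x) > 0\} = \Q$ is countable and hence has measure zero, so $\int \chi_{\Q} \, d m = 0$ even though $f$ is non-zero on a dense set.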
\begin{ex}
\end{ex}
\begin{lemma}
Let $D$ be a measurable subset of $\R$. Let $f : D \ra \R$ be measurable and let $g : D \ra \R$ be a function such that the set $A := \{x \in D : f(x) \neq g(x) \}$ has measure zero. Then $g$ is also measurable.
\end{lemma}
\begin{proof}
Let $E$ be any set contained in $A$.
Then $0 \leq m^*(E) \leq m^*(A) = 0$, so $m^*(E) = 0$.
Let $B \subseteq \R$. Then $(B \cap E) \subseteq E$, so $0 \leq m^*(B \cap E) \leq m^*(E) = 0$, so $m^*(B \cap E) = 0$.
Since $(B \backslash E) \subseteq B$, $m^*(B \backslash E) \leq m^*(B)$.
Therefore $m^*(B) \geq m^*(B \cap E) + m^*(B \backslash E)$.
From the countable subadditivity of $m^*$ (Theorem 5.1 (e)) we know that $m^*(B) = m^*((B \cap E) \cup (B \backslash E)) \leq m^*(B \cap E) + m^*(B \backslash E)$.
Therefore $m^*(B) = m^*(B \cap E) + m^*(B \backslash E)$. Since $B \subseteq \R$ was arbitrary, $E$ is measurable.
Now let $\alpha \in \R$ and consider the sets $G := g^{-1} ((\alpha, \infty])$ and $F := f^{-1} ((\alpha, \infty])$.
Then $G = (F \cup A_1) \cap A_2^c$ where $A_1 = \{x \in \R : g(x) > \alpha, f(x) \leq \alpha \}$ and $A_2 = \{ x \in \R : g(x) \leq \alpha, f(x) > \alpha\}$.
Then $A_1$ and $A_2$ are both contained in $A$.
Since $A$ has measure zero, by the above argument, $A_1$ and $A_2$ are both measurable.
Since $f$ is a measurable function, $F$ is measurable.
Since unions, intersections, and complements of measurable sets are measurable, this implies that $G$ is measurable.
Therefore $g$ is a measurable function.
\end{proof}
From Theorem 4.2 we know that for every $n \in \N$ there exists a partition $P_n'$ such that $U(P_n',f) - L(P_n',f) \leq 1/n$.
Let $P_1 = P_1'$ and then inductively define $P_n = P_{n-1} \cup P_n'$.
Since $P_n$ is a refinement of $P_n'$ we know that $U(P_n,f) - L(P_n, f) \leq 1/n$.
Let $P_n = \{x_{n,0}, x_{n,1}, \ldots , x_{n,k_n}\}$.
Define $I_{n,i} = [x_{n,i},x_{n,i+1})$ for $n \in \N$, $i \in \{0,1,\ldots,k_n-2\}$, and let $I_{n,k_n-1} = [x_{n,k_n-1},x_{n,k_n}]$ so that the $I_{n,i}$ cover $[a,b]$.
Let $M_{n,i} = \sup\{f(x) : x \in \overline{I_{n,i}}\}$ and $m_{n,i} = \inf \{f(x) : x \in \overline{I_{n,i}} \}$.
Define $L_n = \sum_{i=0}^{k_n-1} m_{n,i} \cdot \chi_{I_{n,i}}$ and $U_n = \sum_{i=0}^{k_n-1} M_{n,i} \cdot \chi_{I_{n,i}}$.
Then the $L_n$ and the $U_n$ are both simple functions (some of the coefficients may be equal, but in these cases we can just combine the appropriate intervals $I_{n,i}$).
For $n \in \N$, since $P_{n+1}$ is a refinement of $P_{n}$ we know that $L_{n+1} \geq L_n$ and $U_{n+1} \leq U_n$ (this follows from the argument used in the proof of Lemma 4.1).
Since $f$ is Riemann integrable it is bounded above by some number $M$ and below by some number $m_0$.
Then the $L_n$ are non-decreasing and uniformly bounded and hence converge point-wise to some function $L$.
Similarly, since the $U_n$ are non-increasing and uniformly bounded they converge point-wise to some function $U$.
Since $U$ and $L$ are point-wise limits of measurable functions, $U$ and $L$ are both measurable.
From the monotone convergence theorem applied to the non-negative, non-decreasing functions $(L_n - m_0)$ we know that
\begin{align*}
\int_{[a,b]} L \, d m = \int_{[a,b]} (L - m_0) \, d m + m_0(b-a)
&=\lim_{n \ra \infty} \int_{[a,b]} (L_n - m_0) \, d m + m_0(b-a)\\
&=\lim_{n \ra \infty} \int_{[a,b]} L_n \, d m\\
&= \lim_{n \ra \infty} L(P_n,f)
=\int_a^b f(x) dx.
\end{align*}
We can also apply the monotone convergence theorem to the $U_n$ to derive
\begin{align*}
\int_{[a,b]} U d m = - \int_{[a,b]} (M - U) d m + M(b-a)
&=\lim_{n \ra \infty} -\int_{[a,b]} (M - U_n) d m + M(b-a)\\
&=\lim_{n \ra \infty} \int_{[a,b]} U_n \, d m\\
&= \lim_{n \ra \infty} U(P_n,f)
=\int_a^b f(x) dx.
\end{align*}
Therefore $\int_{[a,b]}(U-L) d m = \int_a^b f(x) dx - \int_a^b f(x) dx = 0$.
From our construction $L_n(x) \leq f(x) \leq U_n(x)$ for all $x \in [a,b]$ and $n \in \N$.
Therefore $L(x) \leq f(x) \leq U(x)$ for all $x \in [a,b]$, so $U-L$ is a non-negative function.
Then by the previous exercise, if $A := \{x \in [a,b] : U(x) \neq L(x)\}$ we have $m(A) = 0$.
Also, if $L(x) = U(x)$ this implies that $L(x) = f(x)$.
Therefore $L(x) = f(x)$ for all $x \in A^c$.
Then by the above lemma, since $L$ is measurable, $f$ is also measurable.
Since $(f-L) = 0$ except on a set of measure zero, by the previous exercise $\int_{[a,b]} (f-L) d m = 0$.
So $\int_{[a,b]} f d m = \int_{[a,b]} L \, d m = \int_a^b f(x) dx$, as desired.
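Note that the converse of this exercise fails: $\chi_{\Q \cap [0,1]}$ is Lebesgue integrable on $[0,1]$ with $\int_{[0,1]} \chi_{\Q \cap [0,1]} \, d m = 0$, since it vanishes off a set of measure zero, but by exercise 4.1 it is not Riemann integrable. So the Lebesgue integral strictly extends the Riemann integral.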
\end{document}