End-of-proof symbol appears on the next line when the proof ends with an enumerate
\documentclass[11pt, a4paper]{report}
% \usepackage{eurosym}% you probably don't need this (most fonts have euro)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% \usepackage{amsmath} % you load this below
%%\usepackage{amsfonts} % you load this below
\usepackage{bm}
\usepackage{amsfonts, graphicx, verbatim, amsmath,amssymb}
\usepackage{color}
% \usepackage{lipsum} % only for demos
\usepackage{array}
\usepackage{setspace}% if you must (for double spacing thesis)
\usepackage{fancyhdr}
\usepackage{enumitem}
\usepackage{tikz}
\usepackage{parskip}


% These are sort of OK, but better to use geometry package
% to set a consistent set of page dimensions
\setlength{\textheight}{24cm}\setlength{\textwidth}{16.5cm}
\setlength{\topmargin}{-1.5cm}
\setlength{\oddsidemargin}{-0.1cm}\setlength{\evensidemargin}{-0.1cm}
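
% A sketch of the geometry-based alternative suggested above. The values are
% approximate translations of the manual settings (geometry measures margins
% from the paper edge), so they would need checking against the intended layout:
% \usepackage[textwidth=16.5cm, textheight=24cm, top=1.04cm, left=2.44cm, includehead]{geometry}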


% This discards its argument, is that intended?
% \U{wibble}  is same as \U{zzzzz}
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}}


\newtheorem{theorem}{Theorem}[section]
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
%\newtheorem{Theorem}[theorem]{Theorem}% duplicate of theorem above
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
%\newtheorem{Fact}[theorem]{Fact}% duplicate of fact below

% Why all these variant corollary forms?
\newtheorem{corollary}[theorem]{Corollary}
%\newtheorem{corol}[theorem]{Corollary}
%\newtheorem{Corollary}[theorem]{Corollary}

\newtheorem{criterion}[theorem]{Criterion}

% why variant definition forms?
\newtheorem{definition}[theorem]{Definition}
%\newtheorem{Definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
% as above
\newtheorem{lemma}[theorem]{Lemma}
%\newtheorem{Lemma}[theorem]{Lemma}
\newtheorem{fact}[theorem]{Fact}
\newtheorem{lma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
% as above
\newtheorem{proposition}[theorem]{Proposition}
%\newtheorem{prop}[theorem]{Proposition}
%as above
\newtheorem{Property}[theorem]{Property}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{Comment}[theorem]{Comment}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}

\newenvironment{proof}[1][Proof]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\newcommand{\ve}{\varepsilon}

\newcommand{\cvgpr}{\xrightarrow{\text{\upshape\tiny P}}}
\newcommand{\cvgdist}{\xrightarrow{\mathrm{d}}}
\newcommand{\G}{{\mathcal{G}}}

\newcommand{\ls}{\limsup_{n\to\infty}}
\newcommand{\rE}{\mathbb{E}}
\newcommand{\A}{{\mathcal{A}}}
\newcommand{\rP}{\mathbb{P}}
\newcommand{\p}{{\mathbb{P}}}
\newcommand{\Z}{{\mathbb{Z}}}

% \mathrm{Be} not {\rm BeK}  %\cal not defined by default since 1993
%\newcommand{\Be}{{\rm Be}}
\newcommand{\re}{\mathrm{e}}
\newcommand{\ep}{\varepsilon}
%\newcommand{\Bin}{{\rm Bin}}
\newcommand{\qand}{\quad\mbox{and}\quad}
\newcommand{\quso}{\quad\mbox{so}\quad}
%\newcommand{\Nn}{{\bf N}}
%\newcommand{\St}{\underline{\rm S}}
%\newcommand{\Rt}{\underline{\rm R}}
%\newcommand{\It}{\underline{\rm I}}
%\newcommand{\one}{{\bf 1}}
\newcommand{\Ups}{{\Upsilon}}
\newcommand{\iu}{{i\mkern1mu}}
\newcommand{\II}{{\mathcal{I}}}
%\newcommand{\Var}{{\rm Var}}
%\newcommand{\var}{{\rm Var}}
%\newcommand{\Cov}{{\rm cov}}
%\newcommand{\cov}{{\rm cov}}
%\newcommand{\corr}{{\rm corr}}
%\newcommand{\lhs}{{\rm lhs}}
%\newcommand{\rhs}{{\rm rhs}}
\newcommand{\ra}{\rightarrow}
\newcommand{\I}{{\mathbf 1}}
\newcommand{\R}{{\mathbb R}}
\newcommand{\N}{{\mathbb N}}
\newcommand{\LL}{{\mathbb L}}
\newcommand{\E}{{\mathbb{E}}}
%\newcommand{\bin}{{\rm Bin}}
%\newcommand{\Pois}{{\rm Pois}}
%\newcommand{\Po}{{\rm Pois}}
%\newcommand{\Bi}{{\cal B}}
\newcommand{\ri}{\mathrm{i}}
\newcommand{\rd}{\mathrm{d}}
\newcommand{\XXi}{\Xi_{k,m}^{(n)}}
\newcommand{\xxi}{\bar{\xi}}
\newcommand{\qedhere}{{\diamond}}
\newcommand{\eqdef}{\stackrel{\mathrm{def}}{=}}
\newcommand{\eqdist}{\stackrel{\mathrm{D}}{=}}
\newcommand{\braket}[2]{{\langle{#1|#2}\rangle}}
\newcommand{\independent}{\perp}

% use amsmath package (that you have loaded) align environment, not eqnarray
%\newcommand{\bb}{\begin{eqnarray*}}
%\newcommand{\ee}{\end{eqnarray*}}
%\newcommand{\bbb}{\begin{eqnarray}}
%\newcommand{\eee}{\end{eqnarray}}
\newcommand{\F}{{\mathcal{F}}}
\newcommand{\qed}{$\blacksquare$}
\newcommand{\cross}{\mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.075ex] \draw (0,0) -- (1,1) (0,1) -- (1,0);}}%
% \parindent 0pt % this is just low level version of following line
% \setlength{\parindent}{0pt}% 
% use parskip package (if you must) to stop indent and put vertical space between paragraphs
% although most documents look better with traditional typesetting with indentation and no vertical space


%\newcommand{\forceindent}{\leavevmode{\parindent=3em\indent}}%eek



%\input{tcilatex}


\begin{document}
\newpage
\pagestyle{fancy}
\fancyhf{}
\fancyhead[EL]{\nouppercase\leftmark}
\fancyhead[OR]{\nouppercase\rightmark}
\fancyhead[ER,OL]{\thepage}
\title{Introduction to Representation Theory}
\chapter{Introduction}
%\bigskip
%\smallskip
\section{Definitions and prerequisites}
Let us recall some basic facts; we begin by giving some definitions.
%\medskip
\begin{definition} 
A group is a set $G$ together with a binary operation $*$ on $G$ satisfying
the following properties:
\begin{enumerate}[label=(G\arabic*),series=group]
\item Closure: $\forall x,y \in G, x * y \in G$.
\item Associativity: $\forall x,y, z \in G, (x * y) * z = x * (y * z)$.
\item Identity: There is an element $e \in G$ such that $e * x = x * e = x$ for all $x \in G$.
\item Inverses: For any $x \in G$ there is an element $y \in G$ such that $x * y = y * x = e$.
\end{enumerate}
\end{definition}

\begin{definition}
A group $G$ is called an abelian group if the following axiom is satisfied:
\begin{enumerate}[label=(G\arabic*),resume=group]
\item Commutativity: $\forall x,y \in G, x * y = y * x$.
\end{enumerate}
\end{definition}
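
For example, $(\mathbb{Z},+)$ is an abelian group, whereas the symmetric group $S_3$ studied in section 1.2 below is not: there $(123)(12) = (13)$ while $(12)(123) = (23)$.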

Let $V$ be a vector space over the field $\mathbb{C}$ of complex
numbers (unless stated otherwise) and let $GL(V)$ be the group of
\textit{isomorphisms} of $V$ onto itself. An element $\alpha$ of
$GL(V)$ is, by definition, a linear mapping of $V$ into $V$ which has
an inverse $\alpha^{-1}$; this inverse is linear. When $V$ has a
finite basis $\lbrace e_{i} \rbrace$ of $n$ elements, each linear map
% colon
$\alpha\colon V \rightarrow V$ is defined by a square matrix
$(\alpha_{ij})$ of order $n$. The coefficients $\alpha_{ij}$ are
complex numbers; they are obtained by expressing the images
$\alpha(e_j)$ in terms of the basis $\lbrace e_{i} \rbrace$:
% $$ is not latex (and \medskip here will break tex's space insertion
\[
\alpha(e_j)=\sum_i \alpha_{ij}e_i
\]
Saying that $\alpha$ is an isomorphism is the same as saying that the determinant
% never use math italic for words use \det which is the right font, with operator spacing
$\det(\alpha) = \det(\alpha_{ij})$ of $\alpha$ is nonzero. The group $GL(V)$
% en-dash not hyphen
-- the general linear group on $V$ --
is thus the group of \textit{invertible (or non-singular) square matrices of order $n$}.
% almost never use \\ in text (and really never use it at an end of paragraph as here)
\begin{definition}Suppose now
  $G$ is a finite group, with identity element
  $1$ and with composition $(s,t) \rightarrow
  st$. A linear representation of $G$ in $V$ is a homomorphism
  $\rho$ from the group $G$ into the group $GL(V)$
  (i.e. $\rho\colon G  \rightarrow
  GL(V)$). In other words, we associate with each element $s \in
  G$ an element $\rho(s)$ of
  $GL(V)$ in such a way that we have the equality
\[\rho(st)=\rho(s)\cdot\rho(t) \quad \text{for } s,t \in G. \]
Often we will write $\rho_s$ as an alternative to $\rho(s)$.
Note that the above formula implies the following:
\[\rho(1)=1,\quad \rho(s^{-1})=\rho(s)^{-1}.\]
\end{definition}
% eek\medskip\
%\\

When $\rho$ is given, we say, by abuse of language, that
$V$ is a \textit{representation} of $G$ (or, more formally, that
$V$ is a \textit{representation space} of
$G$). Let us impose the condition that we are only to consider
representations of finite degree, that is, in
\textit{finite-dimensional} vector spaces; and these will almost
always be vector spaces over
$\mathbb{C}$. Therefore, to avoid repetition, let us agree to use the
term
% never use " in tex source, use `` and ''
``representation'' to mean representation of finite degree over
$\mathbb{C}$, unless stated otherwise. This is not a severe restriction,
since for almost all applications we are interested in a finite number of
elements $x_i$ of
$V$, and hence we can pass to a \textit{subrepresentation}.
%\\

Suppose now that $V$ has finite dimension, and let $n$ be its
dimension; we say also that $n$ is the degree of the
representation. Let $\lbrace e_{i} \rbrace$ be a basis of $V$, and let $R_s$ be the
matrix of $\rho_s$ with respect to this basis. We have:
%\\
\[
\det(R_s)\neq 0, \quad R_{st} = R_s \cdot R_t \quad \text{if }s,t \in G
\]

Let $r_{ij}(s)$ denote the coefficients of the matrix $R_s$; the second formula then becomes:
\[r_{ik}(st)= \sum_j r_{ij}(s)\cdot r_{jk}(t)\]

Conversely, given invertible matrices $R_s = (r_{ij}(s))$
satisfying the preceding identities, there is a corresponding linear
representation $\rho$ of $G$ in $V$; this is what we mean by giving
a representation in matrix form.

\begin{definition}
  Let $\rho$ and $\rho'$ be two representations of the same group
  $G$ in vector spaces $V$ and $V'$. These representations are said
  to be similar (or isomorphic) if there exists a linear isomorphism
  $\tau: V \rightarrow V'$ which ``transforms'' $\rho$ into $\rho'$,
  that is, which satisfies the identity:% no\
\[\tau \circ \rho(s) = \rho'(s)\circ \tau \quad \forall s \in G\]
\end{definition}

When $\rho$ and $\rho'$ are given in matrix form by $R_s$ and
$R_s'$, this means that there exists an invertible matrix $T$
such that:% \smallskip
\[T\cdot R_s = R_s' \cdot T \quad \forall s \in G,\]
or equivalently $R_s' = T\cdot R_s\cdot T^{-1}$.

\section{Example: the representations of $S_3$}
Let us consider the representations of $S_3$, the group of permutations of $3$ letters, or equivalently the symmetries of the equilateral triangle. Without going into details, this group is generated entirely by two elements, the permutation $(123)$ and the permutation $(12)$.

For simplicity let
\[ 
\beta = (123), \quad \gamma = (12)
\] 
I will leave it to the reader to check that the remaining elements of the group $S_3$ are indeed generated by $\beta$ and $\gamma$:
\[ 
\beta^3 = e, \quad  \beta^2 = (132), \quad \beta\gamma = (13), \quad \text{and } \beta^2\gamma = (23) 
\]
\textit{Remark}:
$\beta\gamma$ is in function-composition notation -- i.e. you apply $\gamma$ first and then $\beta$.
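
For instance, tracing each letter through $\gamma$ first and then $\beta$:
\[
1 \xrightarrow{\gamma} 2 \xrightarrow{\beta} 3, \qquad
2 \xrightarrow{\gamma} 1 \xrightarrow{\beta} 2, \qquad
3 \xrightarrow{\gamma} 3 \xrightarrow{\beta} 1,
\]
so $\beta\gamma = (13)$ as claimed.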

We may ask: what qualifies as a valid representation? Because a representation is a homomorphism, it must encapsulate the group structure of the symmetry group. In other words, if we have a homomorphism $\rho$ that associates $\beta$ to the matrix $M$, then we must have $M^3 = I$ in order to preserve the group structure. As $\rho$ need not be an isomorphism, it is not necessary that $M$ have the same order as $\beta$.

Now one such representation is the \textit{trivial} one, where we send both $\beta$ and $\gamma$ to the identity transformation. This representation is $1$-dimensional (it acts on a $1$-dimensional vector space). We call it the \textit{trivial representation}, $I$:
\[
I_\beta = [1], \quad    I_\gamma = [1]
\] 

Another possible representation is the \textit{alternating representation}, which associates to each element of the group its \textit{sign}. To define it: the sign of a permutation $\pi$ is $+1$ if $\pi$ can be written as a product of an even number of transpositions, and $-1$ if it can be written as a product of an odd number of transpositions. We know that $\mathrm{sgn}(\pi)$ is well defined. We also know that $\mathrm{sgn}(\beta) = +1$ and $\mathrm{sgn}(\gamma) = -1$, so:
\[
A_\beta = [1], \quad    A_\gamma = [-1]
\]

It follows that the alternating representation is also a $1$-dimensional representation.
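
As a quick check, in the same composition convention $\beta = (123) = (13)(12)$ is a product of two transpositions, so $\mathrm{sgn}(\beta) = +1$, while $\gamma = (12)$ is a single transposition, so $\mathrm{sgn}(\gamma) = -1$, in agreement with the matrices above.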

Now we could also think of the group elements as rotations and flips of the triangle and associate to each its rotation matrix. Let $\theta = \frac{2\pi}{3}$; then the \textit{standard representation}, which we call $T$, is:
\[ T_\beta  = 
\begin{bmatrix} 
        \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta 
\end{bmatrix}, 
\quad
T_\gamma    = 
\begin{bmatrix} 
        -1 & 0 \\ 0 & 1 
\end{bmatrix}
\]   
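
We can verify directly that $T$ respects the group structure: since $T_\beta$ is rotation by $\theta = \frac{2\pi}{3}$ and $3\theta = 2\pi$,
\[
T_\beta^3 =
\begin{bmatrix}
        \cos 3\theta & -\sin 3\theta \\ \sin 3\theta & \cos 3\theta
\end{bmatrix}
= I, \qquad T_\gamma^2 = I,
\]
and a short computation shows $T_\gamma T_\beta T_\gamma = T_\beta^{-1}$, matching the relation $\gamma\beta\gamma = \beta^{-1}$ in $S_3$.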

The above three representations are known as the \textit{irreducible representations of $S_3$}, and they are in fact the only irreducible representations of $S_3$. We will define the notion of \textit{irreducibility} a little further down (section 1.4), but we can ``think'' of them as the foundation of all other representations. In other words, all representations of $S_3$ can be built from these.

We can of course form other representations, such as the \textit{permutation representation}. Suppose our group $G$ acts on a finite set $X$. This means that for each $s \in G$ there is a given permutation $x \mapsto sx$ of $X$, satisfying the equalities:
\[
1x = x, \quad s(tx)=(st)x \quad \text{if }s,t \in G, x \in X
\] 

Let $V$ be a vector space having a basis $(e_x)_{x\in X}$ indexed by the elements of $X$. For $s \in G$, let $\rho_s$ be the linear map of $V$ into $V$ which sends $e_x$ to $e_{sx}$; the linear representation of $G$ thus obtained is called the permutation representation.
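
For example, take $G = S_3$ acting on $X = \lbrace 1,2,3 \rbrace$. The permutation representation is $3$-dimensional, and since $\beta = (123)$ sends $e_1 \mapsto e_2$, $e_2 \mapsto e_3$ and $e_3 \mapsto e_1$:
\[
\rho_\beta =
\begin{bmatrix}
        0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0
\end{bmatrix}
\]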

\section{Subrepresentations}
Let $\rho: G \rightarrow GL(V)$ be a linear representation and let $W$ be a vector subspace of $V$. Suppose that $W$ is \textit{stable} under the action of $G$ (also known as $G$-stable or invariant); in other words, suppose that $x\in W$ implies $\rho_s(x) \in W$ for all $s \in G$. The restriction $\rho_s^W$ of $\rho_s$ to $W$ is then an isomorphism of $W$ onto itself, and we have $\rho_{st}^W = \rho_s^W \cdot \rho_t^W$. Thus $\rho^W: G \rightarrow GL(W)$ is a linear representation of $G$ in $W$; $W$ is said to be a \textit{subrepresentation} of $V$.

Before considering the following example, we need the notion of a \textit{direct sum}. Let $V$ be a vector space, and let $W$ and $\tilde{W}$ be two subspaces of $V$. The vector space $V$ is said to be the \textit{direct sum} of $W$ and $\tilde{W}$ if each $x \in V$ can be written uniquely in the form $x=w+\tilde{w}$ with $w \in W$ and $\tilde{w} \in \tilde{W}$. This is equivalent to saying that $W \cap \tilde{W} = 0$ and $W + \tilde{W} = V$; we then call $\tilde{W}$ a complement of $W$ in $V$. We also have the property that $\dim(V) = \dim(W)+ \dim(\tilde{W})$. We therefore write $V = W \oplus \tilde{W}$.

\begin{example}
Let $V$ be an $n$-dimensional complex vector space, namely $\mathbb{C}^n$ with the standard basis $\left\lbrace e_1,\dots, e_n \right\rbrace$, and for $\pi \in S_n$ let $\rho(\pi)$ be the permutation matrix defined by $\rho(\pi)e_i = e_{\pi(i)}$. Then $V$ is a representation of $S_n$. The subspace $W$ spanned by $\left\lbrace (1,\dots, 1) \right\rbrace$ is a subrepresentation. This is what we defined earlier as the trivial representation, in that $\pi x = x$ for all $x \in W, \pi \in S_n$.
The subspace $W^\perp = \left\lbrace (x_1,\dots, x_n) : \sum x_i=0 \right\rbrace$ is also a subrepresentation. Note that $W$ and $W^\perp$ are complementary to each other, in the sense that we can write $V = W \oplus W^{\perp}$.
\end{example}

\textit{Remark}:
We say that $V$ is the direct sum of $W$ and $W^\perp$ and write $V=W\oplus W^\perp$. An element of $V$ is identified with a pair $(w,w^\perp)$ with $w\in W$ and $w^\perp \in W^\perp$. If $W$ and $W^\perp$ are given in matrix form by $R_s$ and $R_s^\perp$, then $W\oplus W^\perp$ is given in matrix form by:
\[
\begin{bmatrix}
        R_s & 0 \\ 0 & R_s^\perp
\end{bmatrix}
\]

Returning to subrepresentations, we introduce the following theorem:

\begin{theorem}
Let $\rho : G \rightarrow GL(V)$ be a linear representation of $G$ in $V$ and let $W$ be a vector subspace of $V$ which is also $G$-stable. Then there exists a complement $W^\perp$ of $W$ in $V$ which is also $G$-stable.
\end{theorem}

\begin{proof}
Let $\langle \cdot, \cdot \rangle_1$ be a scalar product on $V$. Define a new scalar product by $\langle u, v \rangle = \sum_{s\in G} \langle \rho_s u, \rho_s v \rangle_1$. Then $\langle \cdot, \cdot \rangle$ is invariant: $\langle \rho_t u, \rho_t v \rangle = \langle u, v \rangle$ for all $t \in G$, since $s \mapsto st$ is a bijection of $G$. The orthogonal complement $W^\perp$ of $W$ with respect to $\langle \cdot, \cdot \rangle$ serves as the required complement: if $x \in W^\perp$ and $w \in W$, then $\langle \rho_s x, w \rangle = \langle x, \rho_{s^{-1}} w \rangle = 0$ since $\rho_{s^{-1}} w \in W$, so $W^\perp$ is $G$-stable.
\end{proof}

\textit{Remark 1}:
The complement need not be unique; consider the following example. Let $S_n$ act on $\mathbb{R}^2$ by $\rho(\pi)(x,y) = \mathrm{sgn}(\pi)(x,y)$. The subspace $W = \left\lbrace (x,y): x=y \right\rbrace$ is invariant. Its orthogonal complement under the usual inner product, $W^\perp = \left\lbrace (x,y): x=-y \right\rbrace$, is also invariant. But the invariant complement is not unique: for example, $W' = \left\lbrace (x,y): 2x=-y \right\rbrace$ is also an invariant complement.

\textit{Remark 2}:
The invariance of the scalar product $\langle \cdot, \cdot \rangle$ means that if $\lbrace e_i \rbrace$ is chosen as an orthonormal basis with respect to $\langle \cdot, \cdot \rangle$, then $\langle \rho_s e_i, \rho_s e_j \rangle = \delta_{ij}$. It follows that the matrices of the $\rho_s$ in this basis are unitary.

\textit{Remark 3}:
The above theorem can fail for finite fields; it can also fail for non-compact groups. For example, take $G$ to be the reals under addition, let $V$ be the space of linear polynomials $ax+b$, and define $(\rho_t f)(x) = f(x+t)$. The constants form a non-trivial invariant subspace with no invariant complement.

\section{Irreducible representations}
Let $\rho : G \rightarrow GL(V)$ be a linear representation of $G$. We say that it is \textit{irreducible} if $V$ is not $0$ and if no vector subspace of $V$ is $G$-stable, except of course $0$ and $V$. By Theorem 1.6, the second part of the condition is equivalent to saying that $V$ \textit{is not the direct sum of two representations} (excluding the trivial decomposition $V = 0 \oplus V$). As we mentioned above (section 1.2), we can think of irreducible representations as building blocks for all other representations, since any representation can be decomposed into irreducible representations. The latter statement is summarised by the following theorem:

\begin{theorem} \textbf{(Maschke's Theorem)} Given a finite group $G$, every representation of $G$ on a non-zero, finite-dimensional complex vector space is a direct sum of irreducible representations.
\end{theorem}

\begin{proof}
Let $V$ be a linear representation of $G$. We proceed by induction on $\dim(V)$. If $\dim(V)=0$, the theorem is trivially satisfied, in the sense that $0$ is the direct sum of the empty family of irreducible representations. Now suppose $\dim(V) \geq 1$. If $V$ is irreducible, we are done. Otherwise $V$ has a proper non-zero $G$-stable subspace, and by Theorem 1.6 it can be decomposed into a direct sum $V_1 \oplus V_2$ with $\dim(V_1) < \dim(V)$ and $\dim(V_2) < \dim(V)$. By the induction hypothesis $V_1$ and $V_2$ are direct sums of irreducible representations, and so the same is true of $V$.
\end{proof}

\section{Tensor product of two representations}

We discussed the direct sum of two vector spaces earlier (which has the properties of ``addition''); there is also the \textit{tensor product}, which has the properties of ``multiplication''. Let $V_1$ and $V_2$ be two vector spaces. The tensor product is a vector space $V_1 \otimes V_2$ together with a map $V_1 \times V_2 \rightarrow V_1 \otimes V_2$, written $(x_1,x_2) \mapsto x_1 \cdot x_2$, which can be defined as the space of formal linear combinations of elements $x_1 \cdot x_2$ subject to the conditions:

\begin{enumerate}[label=(\roman*)]
\item $x_1 \cdot x_2$ is linear in each of the variables $x_1$ and $x_2$.
\item If $\lbrace e_{i_1} \rbrace$ and $\lbrace e_{i_2} \rbrace$ are bases for $V_1$ and $V_2$ respectively, then $\lbrace e_{i_1} \otimes e_{i_2} \rbrace$ is a basis for $V_1 \otimes V_2$.
\end{enumerate} 

We can show that this space exists and is unique up to isomorphism. By condition (ii) we have:
\[
\dim(V_1 \otimes V_2) = \dim(V_1)\cdot \dim(V_2)
\]

Now let $\rho^1\colon G  \rightarrow GL(V_1)$ and $\rho^2\colon G  \rightarrow GL(V_2)$ be two linear representations of a group $G$. For $s \in G$, define an element $\rho_s$ of $GL(V_1 \otimes V_2)$ by the condition:
\[
\rho_s(x_1 \cdot x_2) = \rho_s^1(x_1)\cdot \rho_s^2(x_2) \quad \text{for } x_1 \in V_1, x_2 \in V_2.
\]

We write:
\[
\rho_s = \rho_s^1 \otimes \rho_s^2
\]

The $\rho_s$ above defines a linear representation of $G$ in $V_1 \otimes V_2$ which is called the \textit{tensor product} of the given representations. 

Defining the above in matrix notation: let $(e_{i_1})$ be a basis for $V_1$ and let $r_{i_1j_1}(s)$ be the matrix of $\rho_s^1$ with respect to this basis, and define $(e_{i_2})$ and $r_{i_2j_2}(s)$ in the same manner. Then we have:
\[
\rho_s^1(e_{j_1}) = \sum_{i_1} r_{i_1j_1}(s) \cdot e_{i_1}, \qquad \rho_s^2(e_{j_2}) = \sum_{i_2} r_{i_2j_2}(s) \cdot e_{i_2},
\]

which implies:
\[
\rho_s(e_{j_1}\cdot e_{j_2}) = \sum_{i_1,i_2} r_{i_1j_1}(s) \cdot r_{i_2j_2}(s)\cdot e_{i_1} \cdot  e_{i_2}
\]

Accordingly the matrix of $\rho_s$ is $(r_{i_1j_1}(s) \cdot r_{i_2j_2}(s))$, which is the tensor product of the matrices of $\rho_s^1$ and $\rho_s^2$.
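
Concretely, if $V_1$ is $2$-dimensional with matrix $R_s^1 = (r_{i_1j_1}(s))$, the matrix of $\rho_s$, with rows and columns indexed by the pairs $(i_1,i_2)$, has the block (Kronecker) form:
\[
R_s^1 \otimes R_s^2 =
\begin{bmatrix}
        r_{11}(s)\, R_s^2 & r_{12}(s)\, R_s^2 \\
        r_{21}(s)\, R_s^2 & r_{22}(s)\, R_s^2
\end{bmatrix}
\]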

We must add that the tensor product of two irreducible representations is not in general irreducible. It decomposes into a direct sum of irreducible representations, which can be determined by means of character theory; we shall discuss this in the next chapter.

\chapter{Character theory}

\section{The character of a representation}

\begin{definition}
Let $V$ be a vector space having a basis $\lbrace e_i \rbrace_{i=1}^n$ and let $a\colon V \rightarrow V$ be a linear map with matrix $(a_{ij})$. By the trace of $a$ we mean the scalar:

\[
\mathrm{Tr}(a) = \sum_i a_{ii}
\]
\end{definition}

The trace of a matrix is the sum of its eigenvalues and is invariant under a change of basis. The trace is of such importance in representation theory that it receives its own name, the \textit{character} of the representation, which yields the following definition.

\begin{definition}
Let $\rho: G \rightarrow GL(V)$ be a linear representation of a finite group $G$ in the vector space $V$. For each $s \in G$ define:
\[
\chi_\rho(s) = \mathrm{Tr}(\rho_s)
\]
The complex-valued function $\chi_\rho$ on $G$ thus obtained is called the \textit{character} of the representation $\rho$; this function is of great significance as it \textit{characterizes} the representation.
\end{definition}    

We can say more about the character function: it takes the same value on all elements of a \textit{conjugacy class} in the group.

\begin{definition}
For each $s \in G$, its conjugacy class is the set of all elements $s' \in G$ for which there exists $t \in G$ with
\[
s' = t^{-1}st.
\]
\end{definition}
So the character $\chi$ is a class function; in other words, it is constant on each conjugacy class of $G$ and may be regarded as a function on the set of conjugacy classes. More precisely, if $s$ and $s'$ are conjugate, then
\[
\chi(s) = \chi(s')
\]
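
For example, for the standard representation $T$ of $S_3$ from section 1.2, with $\theta = \frac{2\pi}{3}$:
\[
\chi_T(e) = 2, \qquad \chi_T(\beta) = 2\cos\theta = -1, \qquad \chi_T(\gamma) = -1 + 1 = 0,
\]
and $\chi_T$ is indeed constant on the three conjugacy classes $\lbrace e \rbrace$, $\lbrace \beta, \beta^2 \rbrace$ and $\lbrace \gamma, \beta\gamma, \beta^2\gamma \rbrace$.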

\begin{proposition} 
If $\chi$ is the character of a representation $\rho$ of degree $n$, then
\begin{enumerate}[label=(\roman*)]
\item $\chi(1) = n$ 
\item $\chi(s^{-1}) = \chi(s)^*\quad \text{for } s \in G$ \qquad (where $*$ denotes complex conjugation)
\item $\chi(tst^{-1})=\chi(s) \quad \text{for } s,t \in G$
\end{enumerate}

\end{proposition}

\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item We have $\rho(1) = 1$, and $\mathrm{Tr}(1)=n$ since $V$ has dimension $n$.
\item Since $G$ is finite, $s^\alpha = 1$ for some $\alpha \geq 1$, so $\rho(s)^\alpha=I$. It follows that the eigenvalues $\lambda_i$ of $\rho_s$ are roots of unity; in particular $\lambda_i^* = \lambda_i^{-1}$. Now with $*$ denoting complex conjugation we have:
\[
\chi(s)^* = \mathrm{Tr}(\rho_s)^* = \sum \lambda_i^* = \sum \lambda_i^{-1} = \mathrm{Tr}(\rho_s^{-1}) = \mathrm{Tr}(\rho_{s^{-1}}) = \chi(s^{-1})
\]
\item Using the well-known formula $\mathrm{Tr}(ab)=\mathrm{Tr}(ba)$, the result follows.
\end{enumerate}
\end{proof}

\begin{proposition}
Let $\rho^1\colon G  \rightarrow GL(V_1)$ and $\rho^2\colon G  \rightarrow GL(V_2)$ be two linear representations of a group $G$, and let $\chi_1$ and $\chi_2$ be their characters. Then:
\begin{enumerate}[label=(\roman*)]
\item The character $\chi$ of the direct sum representation $V_1 \oplus V_2$ is $\chi_1 + \chi_2$
\item The character $\eta$ of the tensor product representation $V_1 \otimes V_2$ is $\chi_1 \cdot \chi_2$
\end{enumerate}
\end{proposition}

\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item Assume we are given $\rho^1$ and $\rho^2$ in matrix form, $R_s^1$ and $R_s^2$ respectively. The direct sum representation $V_1 \oplus V_2$ is given by:

\[ R_s  =
\begin{bmatrix}
        R_s^1 & 0 \\ 0 & R_s^2
\end{bmatrix}
\]

Now $\mathrm{Tr}(R_s) = \mathrm{Tr}(R_s^1)+\mathrm{Tr}(R_s^2)$, that is, $\chi(s) = \chi_1(s) + \chi_2(s)$.
\item Now we have the following:
\[
\chi_1(s) = \sum_{i_1} r_{i_1i_1}(s), \qquad \chi_2(s) = \sum_{i_2} r_{i_2i_2}(s)
\]
So
\[
\eta(s) = \sum_{i_1,i_2} r_{i_1i_1}(s) \cdot r_{i_2i_2}(s) = \chi_1(s) \cdot \chi_2(s)
\]
\end{enumerate} 
\end{proof}
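
As an illustration, consider the permutation representation of $S_3$ on $\mathbb{C}^3$ (Example 1.5 of section 1.3). Its character $\chi_{\mathrm{perm}}$ counts fixed points, and $V = W \oplus W^\perp$ with $W$ carrying the trivial representation $I$; one can check that $W^\perp$ is isomorphic to the standard representation $T$, and accordingly
\[
\chi_{\mathrm{perm}}(e) = 3 = 1 + 2, \qquad
\chi_{\mathrm{perm}}(\beta) = 0 = 1 + (-1), \qquad
\chi_{\mathrm{perm}}(\gamma) = 1 = 1 + 0,
\]
that is, $\chi_{\mathrm{perm}} = \chi_I + \chi_T$.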

\section{Schur's lemma and its applications}

\begin{theorem}(\textbf{Schur's lemma}) Let $\rho^1\colon G  \rightarrow GL(V_1)$ and $\rho^2\colon G  \rightarrow GL(V_2)$ be two irreducible representations of a group $G$, and let $f$ be a linear mapping of $V_1$ into $V_2$ such that $\rho_s^2 \circ f = f \circ \rho_s^1$ for all $s \in G$. Then:
\begin{enumerate}[label=(\roman*)]
\item If $\rho^1$ and $\rho^2$ are not isomorphic, we have $f=0$
\item If $V_1 = V_2$ and $\rho^1 = \rho^2$, $f$ is a scalar multiple of the identity.
\end{enumerate}
\end{theorem}

\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item Let $W_1$ and $W_2$ be the kernel and image of $f$, respectively. Both are invariant subspaces. For the kernel: if $x \in W_1$, i.e. $f(x)=0$, then $f\rho_s^1(x) = \rho_s^2f(x)=0$, so $\rho_s^1(x) \in W_1$. By irreducibility, $W_1$ is either $0$ or all of $V_1$. If $W_1 = V_1$ then $f=0$ and we are done, so assume $f \neq 0$. By the same argument the image $W_2$, being invariant, is either $0$ or $V_2$, and since $f \neq 0$ we have $W_2 = V_2$. Thus $W_1 = 0$ and $W_2 = V_2$, which shows $f$ is an isomorphism, contradicting the assumption that $\rho^1$ and $\rho^2$ are not isomorphic. Hence $f = 0$.
\item $f$ has an eigenvalue $\lambda$ (there exists at least one, since the field of scalars is the field of complex numbers, which is algebraically closed). The map $\hat{f} = f - \lambda I$ satisfies $\rho_s^2 \circ \hat{f} = \hat{f} \circ \rho_s^1$ and has a non-trivial kernel, so by the argument in (i) it cannot be an isomorphism; hence $\hat{f} = 0$, that is, $f = \lambda I$.
\end{enumerate}
\end{proof}






\end{document}

I asked this question before but did not get a clear solution. From Proposition 2.1.4 onwards, the end-of-proof symbol appears on the next line, but I would like it to appear at the end of the last word, as it does earlier in the document.

Answer 1

Instead of making the definition by hand, you can use the amsthm package. It already defines a proof environment, and it can be adjusted slightly to give the same layout as the one in your example. (If you switch to amsthm, also remove the manual \newenvironment{proof} and \newcommand{\qedhere} from your preamble, since amsthm already provides both.)

The amsthm documentation even has an example of how to get a line break after "Proof":

\documentclass{report}

\usepackage{amssymb}
\usepackage{amsthm}

\makeatletter
\renewenvironment{proof}[1][\normalfont\bfseries\proofname]{\par
  \pushQED{\qed}%
  \normalfont \topsep6\p@\@plus6\p@\relax
  \trivlist
  \item\relax
        {\itshape
    #1\@addpunct{.}}\hspace\labelsep\ignorespaces
}{%
  \popQED\endtrivlist\@endpefalse
}
\makeatother

\renewcommand{\qedsymbol}{$\blacksquare$}

\begin{document}

\begin{proof}
\begin{enumerate}
\item Let $W_1$ and $W_2$ be the kernel and image of $f$. Note that the kernel and image of $f$ are both invariant subspaces. For the kernel with $x \in W_1$, if $f(x)=0$, then $f\rho_s^1(x) = \rho_s^2f(x)=0$ so $\rho_s^1(x) \in W_1$. By irreducibility, $W_1$ is trivial or the whole space. We neglect the first case as it implies $f=0$ which is trivial. By virtue of the same argument presented above we can conclude the image $W_2$ is equal to $V_2$. Since by assumption $f \neq 0$ we have $W_1 = 0$ and $W_2 = V_2$ which shows $f$ is an isomorphism.
\item $f$ has a non-zero eigenvalue(there exits at least one, since the field of scalars is the field of complex numbers). The map $\hat{f} = f - \lambda I$ satisfies $\rho_s^2 \circ \hat{f} = \hat{f} \circ \rho_s^1$ and has a non-trivial kernel so $\hat{f} = 0$.\qedhere
\end{enumerate}
\end{proof}

\end{document}
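
Note the \qedhere at the end of the last item: amsthm uses it to anchor the end-of-proof symbol to that line, so the symbol appears after the final word of the enumerate instead of on a line of its own.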
