Determinants and Matrices
In mathematics, the determinant is a scalar value that is a function of the entries of a square matrix. It characterizes some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the linear map represented by the matrix is an isomorphism. The determinant of a product of matrices is the product of their determinants (the preceding property is a corollary of this one). The determinant of a matrix $A$ is denoted $\det(A)$, $\det A$, or $|A|$.
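In the simplest nontrivial case, the determinant of a $2 \times 2$ matrix has an explicit closed form:

$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc.$$

Such a matrix is invertible exactly when $ad - bc \neq 0$, illustrating the invertibility criterion above.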
Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and determinants can be used to solve these equations (Cramer's rule), although other methods of solution are computationally much more efficient. Determinants are used for defining the characteristic polynomial of a matrix, whose roots are the eigenvalues. In geometry, the signed $n$-dimensional volume of an $n$-dimensional parallelepiped is expressed by a determinant. This is used in calculus with exterior differential forms and the Jacobian determinant, in particular for changes of variables in multiple integrals.
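As a small worked instance of Cramer's rule (our own illustration, not from the source article): for the system $a_{11}x + a_{12}y = b_1$, $a_{21}x + a_{22}y = b_2$ with coefficient matrix $A$ and $\det A \neq 0$, the solution is

$$x = \frac{1}{\det A}\det\begin{pmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{pmatrix}, \qquad y = \frac{1}{\det A}\det\begin{pmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{pmatrix}.$$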
The determinant has several key properties that can be proved by direct evaluation of the definition for $2 \times 2$ matrices, and that continue to hold for determinants of larger matrices. They are as follows:[1] first, the determinant of the identity matrix $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ is 1. Second, the determinant is zero if two rows are the same:

$$\det\begin{pmatrix} a & b \\ a & b \end{pmatrix} = ab - ba = 0.$$
The entries $a_{1,1}$ etc. are, for many purposes, real or complex numbers. As discussed below, the determinant is also defined for matrices whose entries are in a commutative ring.
There are various equivalent ways to define the determinant of a square matrix A, i.e., one with the same number of rows and columns: the determinant can be defined via the Leibniz formula, an explicit formula involving sums of products of certain entries of the matrix. The determinant can also be characterized as the unique function depending on the entries of the matrix satisfying certain properties. This approach can also be used to compute determinants by simplifying the matrices in question.
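For reference, the Leibniz formula expresses the determinant of an $n \times n$ matrix $A = (a_{i,j})$ as a signed sum over all permutations:

$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)},$$

where $S_n$ denotes the set of all permutations of $\{1, 2, \dots, n\}$ and $\operatorname{sgn}(\sigma)$ is the sign of the permutation $\sigma$.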
To see this, it suffices to expand the determinant by multi-linearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a standard basis vector. These determinants are either 0 (by property 9) or else ±1 (by properties 1 and 12 below), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear.[citation needed]
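As a concrete illustration (our own sketch, not part of the source article; the function name det_leibniz is ours), the Leibniz formula translates directly into code:

```python
from itertools import permutations

def det_leibniz(a):
    """Determinant via the Leibniz formula: a signed sum over all n! permutations."""
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        # sgn(perm) = (-1)^(number of inversions)
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        term = (-1) ** inversions
        for i in range(n):
            term *= a[i][perm[i]]   # product a[0][perm[0]] * ... * a[n-1][perm[n-1]]
        total += term
    return total

print(det_leibniz([[1, 2], [3, 4]]))   # -2
```

Because it enumerates all $n!$ permutations, this is only practical for very small matrices.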
These characterizing properties and their consequences listed above are theoretically significant, but can also be used to compute determinants for concrete matrices. In fact, Gaussian elimination can be applied to bring any matrix into upper triangular form, and the steps in this algorithm affect the determinant in a controlled way: swapping two rows flips the sign, adding a multiple of one row to another leaves the determinant unchanged, and the determinant of an upper triangular matrix is the product of its diagonal entries. A sketch of the method follows.
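A minimal sketch of this approach (our own, with partial pivoting for numerical stability; the function name det_gauss is ours):

```python
def det_gauss(matrix):
    """Determinant via Gaussian elimination to upper triangular form."""
    a = [row[:] for row in matrix]   # work on a copy
    n = len(a)
    det = 1.0
    for k in range(n):
        # Partial pivoting: pick the row with the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if a[p][k] == 0:
            return 0.0               # the column is zero below the diagonal
        if p != k:
            a[k], a[p] = a[p], a[k]
            det = -det               # a row swap changes the sign
        det *= a[k][k]               # diagonal entry of the triangular form
        for i in range(k + 1, n):
            # Adding a multiple of row k to row i leaves det unchanged.
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
    return det

print(det_gauss([[1.0, 2.0], [3.0, 4.0]]))   # -2.0
```

This runs in on the order of $n^3$ operations, in contrast to the factorial cost of the Leibniz formula.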
The determinant is a multiplicative map, i.e., for square matrices $A$ and $B$ of equal size, the determinant of a matrix product equals the product of their determinants:

$$\det(AB) = \det(A)\det(B).$$
Unwinding the determinants of these $2 \times 2$ matrices gives back the Leibniz formula mentioned above. Similarly, the Laplace expansion along the $j$-th column is the equality

$$\det(A) = \sum_{i=1}^{n} (-1)^{i+j} a_{i,j} M_{i,j},$$

where $M_{i,j}$ is the minor obtained by deleting the $i$-th row and $j$-th column of $A$.
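For instance, expanding a general $3 \times 3$ determinant along the first row gives

$$\det\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = a(ei - fh) - b(di - fg) + c(dh - eg).$$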
Here, $t$ is the indeterminate of the polynomial and $I$ is the identity matrix of the same size as $A$. By means of this polynomial, determinants can be used to find the eigenvalues of the matrix $A$: they are precisely the roots of this polynomial, i.e., those complex numbers $\lambda$ such that

$$\det(A - \lambda I) = 0.$$
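For a $2 \times 2$ matrix the characteristic polynomial can be written out explicitly in terms of the trace and the determinant:

$$\det(A - tI) = t^2 - \operatorname{tr}(A)\,t + \det(A).$$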
Historically, determinants were used long before matrices: a determinant was originally defined as a property of a system of linear equations. The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is non-zero). In this sense, determinants were first used in the Chinese mathematics textbook The Nine Chapters on the Mathematical Art (九章算術), written by Chinese scholars around the 3rd century BCE. In Europe, solutions of linear systems of two equations were expressed by Cardano in 1545 by a determinant-like entity.[23]
Determinants proper originated with the work of Seki Takakazu in 1683 in Japan and, independently, of Leibniz in 1693.[24][25][26][27] Cramer (1750) stated Cramer's rule without proof.[28] Both Cramer and Bézout (1779) were led to determinants by the question of plane curves passing through a given set of points.[29]
Vandermonde (1771) first recognized determinants as independent functions.[25] Laplace (1772) gave the general method of expanding a determinant in terms of its complementary minors; Vandermonde had already given a special case.[30] Immediately following, Lagrange (1773) treated determinants of the second and third order and applied them to questions of elimination theory; he proved many special cases of general identities.
Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word "determinant" (Laplace had used "resultant"), though not in the present signification, but rather as applied to the discriminant of a quantic.[31] Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.
The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the textbooks on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.
The determinant thus defines a map

$$\det\colon \mathrm{GL}_n(R) \to R^\times$$

between the general linear group (the group of invertible $n \times n$ matrices with entries in $R$) and the multiplicative group of units in $R$. Since it respects the multiplication in both groups, this map is a group homomorphism.
For matrices with an infinite number of rows and columns, the above definitions of the determinant do not carry over directly. For example, in the Leibniz formula, an infinite sum (all of whose terms are infinite products) would have to be calculated. Functional analysis provides different extensions of the determinant for such infinite-dimensional situations, which, however, only work for particular kinds of operators.
For square matrices with entries in a non-commutative ring, there are various difficulties in defining determinants analogously to the commutative case. A meaning can be given to the Leibniz formula provided that the order for the product is specified, and similarly for other definitions of the determinant, but non-commutativity then leads to the loss of many fundamental properties of the determinant, such as the multiplicative property or that the determinant is unchanged under transposition of the matrix. Over non-commutative rings, there is no reasonable notion of a multilinear form (existence of a nonzero bilinear form[clarify] with a regular element of R as value on some pair of arguments implies that R is commutative). Nevertheless, various notions of non-commutative determinant have been formulated that preserve some of the properties of determinants, notably quasideterminants and the Dieudonné determinant. For some classes of matrices with non-commutative elements, one can define the determinant and prove linear algebra theorems that are very similar to their commutative analogs. Examples include the q-determinant on quantum groups, the Capelli determinant on Capelli matrices, and the Berezinian on supermatrices (i.e., matrices whose entries are elements of $\mathbb{Z}_2$-graded rings).[50] Manin matrices form the class closest to matrices with commutative elements.
Determinants are mainly used as a theoretical tool. They are rarely calculated explicitly in numerical linear algebra, where for applications such as checking invertibility and finding eigenvalues the determinant has largely been supplanted by other techniques.[51] Computational geometry, however, does frequently use calculations related to determinants.[52]
While the determinant can be computed directly using the Leibniz rule, this approach is extremely inefficient for large matrices, since that formula requires calculating $n!$ ($n$ factorial) products for an $n \times n$ matrix. Thus, the number of required operations grows very quickly: it is of order $n!$. The Laplace expansion is similarly inefficient. Therefore, more involved techniques have been developed for calculating determinants.
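In practice, numerical libraries compute the determinant from a matrix decomposition such as LU, which costs on the order of $n^3$ operations. A brief usage sketch (assuming NumPy is available; the example matrix is ours):

```python
import numpy as np

a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# numpy.linalg.det computes an LU factorization and multiplies
# the diagonal entries, rather than using the Leibniz formula.
print(np.linalg.det(a))   # -3.0, up to floating-point rounding
```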
Charles Dodgson (i.e. Lewis Carroll of Alice's Adventures in Wonderland fame) invented a method for computing determinants called Dodgson condensation. Unfortunately, this interesting method does not always work in its original form.[59]
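A minimal sketch of the condensation step (our own illustration; the function name dodgson_det is ours, and exact Fraction arithmetic keeps the divisions exact). The method repeatedly replaces a matrix by the matrix of its connected $2 \times 2$ minors, dividing entrywise by the interior of the matrix from two steps earlier; it breaks down exactly when one of those interior entries is zero, the failure mode noted above:

```python
from fractions import Fraction

def dodgson_det(matrix):
    """Determinant via Dodgson condensation.

    Raises ZeroDivisionError when an interior entry vanishes --
    the case where the method fails in its original form.
    """
    n = len(matrix)
    cur = [[Fraction(x) for x in row] for row in matrix]
    # Dummy all-ones matrix, so the first condensation step divides by 1.
    prev = [[Fraction(1)] * (n + 1) for _ in range(n + 1)]
    while len(cur) > 1:
        m = len(cur)
        nxt = [[(cur[i][j] * cur[i + 1][j + 1] - cur[i][j + 1] * cur[i + 1][j])
                / prev[i + 1][j + 1]          # divide by the interior entry of
                for j in range(m - 1)]        # the matrix from two steps earlier
               for i in range(m - 1)]
        prev, cur = cur, nxt
    return cur[0][0]

print(dodgson_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3
```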