Inverse Laplace Transforms
Laplace Transforms
Laplace Property Demonstrators
1. First Shifting Theorem: $\mathcal{L}\{e^{at}f(t)\}$
2. Second Shifting Theorem: $\mathcal{L}\{f(t-c)u(t-c)\}$
3. Initial & Final Value Theorems (Structural Demo)
Inverse Laplace Transforms
1. Poles and Zeros of $F(s)$
2. Inverse Laplace Transform of Simple Functions
The Inverse Laplace Transform, denoted by $\mathcal{L}^{-1}\{F(s)\}$, converts a function of the complex variable $s$ back to a function of the real variable $t$. It is used to find the time-domain solution of systems whose behavior is described by Laplace Transforms, especially for solving differential equations.
Unlike the forward transform, the inverse transform is often found by recognizing common patterns in $F(s)$ and using a table of Laplace Transforms in reverse, or by decomposing complex $F(s)$ functions into simpler forms (e.g., using partial fraction decomposition) whose inverse transforms are known.
3. Inverse Transforms using Partial Fraction Decomposition
For more complex $F(s)$ expressions that are rational functions (polynomial over polynomial), partial fraction decomposition is often required. This method breaks down a complex fraction into a sum of simpler fractions, each of which can be inversely transformed using standard table lookups.
General Steps for Partial Fraction Decomposition:
- Check for Improper Fractions: If the degree of the numerator polynomial is greater than or equal to the degree of the denominator polynomial, perform polynomial long division first to get a polynomial and a proper fraction.
- Factor the Denominator: Factor the denominator $D(s)$ completely into linear factors $(s-a)$ and irreducible quadratic factors $(s^2+bs+c)$.
- Set up the Partial Fraction Form:
- For each distinct linear factor $(s-a)$, include a term $\frac{A}{s-a}$.
- For each repeated linear factor $(s-a)^n$, include terms $\frac{A_1}{s-a} + \frac{A_2}{(s-a)^2} + \dots + \frac{A_n}{(s-a)^n}$.
- For each distinct irreducible quadratic factor $(s^2+bs+c)$, include a term $\frac{Bs+C}{s^2+bs+c}$.
- For each repeated irreducible quadratic factor $(s^2+bs+c)^n$, include terms $\frac{B_1s+C_1}{s^2+bs+c} + \frac{B_2s+C_2}{(s^2+bs+c)^2} + \dots + \frac{B_ns+C_n}{(s^2+bs+c)^n}$.
- Solve for Coefficients: Multiply both sides of the equation by the original denominator $D(s)$ to clear fractions. Then, solve for the unknown constants (A, B, C, etc.) by:
- Substituting the roots of the linear factors into the equation.
- Equating coefficients of like powers of $s$.
- Substituting convenient values of $s$.
- Inverse Transform Each Term: Once the partial fractions are found, apply the inverse Laplace Transform to each simple term using the standard table.
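The setup and coefficient-solving steps above can be checked symbolically. As an illustrative sketch (assuming SymPy is available), `sympy.apart` performs the decomposition for a rational function with a repeated linear factor, the case covered by the second bullet:

```python
import sympy as sp

s = sp.symbols('s')

# Proper rational function with a repeated linear factor (s+1)^2 and a
# distinct linear factor (s+3); the expansion has the form
#   A/(s+1) + B/(s+1)^2 + C/(s+3)
F = (s + 2) / ((s + 1)**2 * (s + 3))
decomp = sp.apart(F, s)
print(decomp)
```

The coefficient of a simple-pole term can also be recovered directly as a residue, e.g. `sp.residue(F, s, -3)` gives the constant over $(s+3)$.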
Example: For $F(s) = \frac{3s+7}{s^2+4s+3}$
1. Factor denominator: $s^2+4s+3 = (s+1)(s+3)$
2. Set up partial fractions: $$ \frac{3s+7}{(s+1)(s+3)} = \frac{A}{s+1} + \frac{B}{s+3} $$
3. Solve for A and B:
Multiply by $(s+1)(s+3)$: $$ 3s+7 = A(s+3) + B(s+1) $$
Set $s=-1$: $3(-1)+7 = A(-1+3) + B(-1+1) \implies 4 = 2A \implies A = 2$
Set $s=-3$: $3(-3)+7 = A(-3+3) + B(-3+1) \implies -2 = -2B \implies B = 1$
So, $$ F(s) = \frac{2}{s+1} + \frac{1}{s+3} $$
4. Inverse Transform each term:
$$ \mathcal{L}^{-1}\left\{\frac{2}{s+1}\right\} = 2e^{-t} $$
$$ \mathcal{L}^{-1}\left\{\frac{1}{s+3}\right\} = e^{-3t} $$
Final answer: $$ f(t) = 2e^{-t} + e^{-3t} $$
Note: While this section provides a detailed explanation of partial fraction decomposition, fully automated symbolic decomposition is beyond the current scope of this specific calculator. You can use the "Simple Function Inverses" calculator above for each individual term after you've performed the partial fraction decomposition manually.
Solving Simultaneous Differential Equations (Laplace)
3. Solver: Algebraic Solution of System in s-Domain (2x2)
This solver assists with Step 2 of the procedure: solving the algebraic system after Laplace transformation. You must provide the equations already transformed into the $s$-domain.
Consider a system of equations in the form:
Equation 1: $A_{11}X(s) + A_{12}Y(s) = B_1(s)$
Equation 2: $A_{21}X(s) + A_{22}Y(s) = B_2(s)$
Next Step: After finding $X(s)$ and $Y(s)$, use the "Simple Inverse Transforms" section (and "Partial Fractions" guidance) to find the time-domain solutions $x(t)$ and $y(t)$.
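The algebraic step this solver performs can be sketched symbolically. Using a hypothetical 2x2 system (the coefficients below are illustrative, not from the original text), the matrix form $A\,[X, Y]^T = [B_1, B_2]^T$ is solved with SymPy:

```python
import sympy as sp

s = sp.symbols('s')

# Hypothetical s-domain system:
#   (s+2) X(s) +       Y(s) = 1/s
#         X(s) + (s+2) Y(s) = 0
A = sp.Matrix([[s + 2, 1],
               [1, s + 2]])
b = sp.Matrix([1/s, 0])

X, Y = A.LUsolve(b)            # equivalent to Cramer's rule for 2x2
X, Y = sp.simplify(X), sp.simplify(Y)
print(X)
print(Y)
```

Here $\det A = (s+2)^2 - 1 = (s+1)(s+3)$, so $X(s) = \frac{s+2}{s(s+1)(s+3)}$ and $Y(s) = \frac{-1}{s(s+1)(s+3)}$, ready for partial fractions and inversion.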
1. Introduction
Laplace Transforms provide an exceptionally powerful and systematic method for solving systems of linear ordinary differential equations, especially when initial conditions are involved. The key advantage is that the Laplace transform converts differential equations in the time domain into algebraic equations in the s-domain. This transforms a calculus problem into an algebra problem, which is typically much easier to solve.
Once the algebraic system is solved for the transformed variables, the inverse Laplace transform is applied to convert the solutions back into the time domain, yielding the specific solutions to the original differential equations.
2. Procedure for Solving Simultaneous DEs with Laplace Transforms
- Step 1: Take the Laplace Transform of Each Equation.
Apply the Laplace Transform to every term in each differential equation in the system. Use the linearity property and the transforms of derivatives:
- $\mathcal{L}\{x'(t)\} = sX(s) - x(0)$
- $\mathcal{L}\{x''(t)\} = s^2X(s) - sx(0) - x'(0)$
- ... and similarly for other variables like $y(t)$ (i.e., $\mathcal{L}\{y(t)\} = Y(s)$).
Substitute all given initial conditions at this step. This will result in a system of algebraic equations involving $X(s)$, $Y(s)$, etc.
- Step 2: Solve the Algebraic System for the Transformed Variables.
You now have a system of linear algebraic equations (e.g., in terms of $X(s)$ and $Y(s)$). Use standard algebraic techniques (e.g., substitution, elimination, Cramer's rule, matrix methods) to solve for each transformed variable individually (e.g., express $X(s)$ and $Y(s)$ as rational functions of $s$).
- Step 3: Perform the Inverse Laplace Transform.
Once you have each transformed solution (e.g., $X(s)$ and $Y(s)$) in its simplest rational form, apply the inverse Laplace Transform to each one. This often requires:
- Partial fraction decomposition to break down complex rational functions into simpler terms.
- Recognizing standard inverse Laplace transforms from the common table.
- Applying inverse shifting theorems if exponential terms are present.
This final step yields the time-domain solutions $x(t)$, $y(t)$, etc., to the original system of differential equations.
Introduction to Laplace Transforms
The Laplace Transform is an integral transform that converts a function of a real variable $t$ (often time) to a function of a complex variable $s$ (complex frequency). It is a powerful tool for solving linear ordinary and partial differential equations, particularly in engineering and physics.
Understanding the properties of Laplace Transforms is crucial for effectively applying them to solve problems.
Table of Common Laplace Transforms
| $f(t)$ | $F(s) = \mathcal{L}\{f(t)\}$ | Conditions |
|---|---|---|
| $1$ or $u(t)$ | $\frac{1}{s}$ | $s > 0$ |
| $t$ | $\frac{1}{s^2}$ | $s > 0$ |
| $t^n$ ($n$ a positive integer) | $\frac{n!}{s^{n+1}}$ | $s > 0$ |
| $t^a$ ($a > -1$, real) | $\frac{\Gamma(a+1)}{s^{a+1}}$ | $s > 0$ |
| $e^{at}$ | $\frac{1}{s-a}$ | $s > a$ |
| $\sin(at)$ | $\frac{a}{s^2+a^2}$ | $s > 0$ |
| $\cos(at)$ | $\frac{s}{s^2+a^2}$ | $s > 0$ |
| $t e^{at}$ | $\frac{1}{(s-a)^2}$ | $s > a$ |
| $t^n e^{at}$ | $\frac{n!}{(s-a)^{n+1}}$ | $s > a$ |
| $e^{at}\sin(bt)$ | $\frac{b}{(s-a)^2+b^2}$ | $s > a$ |
| $e^{at}\cos(bt)$ | $\frac{s-a}{(s-a)^2+b^2}$ | $s > a$ |
| $\sinh(at)$ | $\frac{a}{s^2-a^2}$ | $s > \lvert a \rvert$ |
| $\cosh(at)$ | $\frac{s}{s^2-a^2}$ | $s > \lvert a \rvert$ |
| $\delta(t)$ (Dirac delta) | $1$ | |
| $u(t-c)$ (Heaviside step) | $\frac{e^{-cs}}{s}$ | $c \ge 0$, $s > 0$ |
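A few rows of the table can be spot-checked with a computer algebra system. This is a sketch assuming SymPy is available (`noconds=True` drops the region-of-convergence conditions that SymPy otherwise returns alongside the transform):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

def L(f):
    """Forward Laplace transform, conditions suppressed."""
    return sp.laplace_transform(f, t, s, noconds=True)

# Each pair is (computed transform, table entry).
checks = {
    "t^3":     (L(t**3),         6 / s**4),
    "e^{2t}":  (L(sp.exp(2*t)),  1 / (s - 2)),
    "cos(3t)": (L(sp.cos(3*t)),  s / (s**2 + 9)),
    "sinh(t)": (L(sp.sinh(t)),   1 / (s**2 - 1)),
}
for name, (got, expected) in checks.items():
    assert sp.simplify(got - expected) == 0, name
```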
First Shifting Theorem (Frequency Shifting)
The First Shifting Theorem states that if $\mathcal{L}\{f(t)\} = F(s)$, then the Laplace Transform of $e^{at}f(t)$ is $F(s-a)$.
Derivation:
By definition of the Laplace Transform:
$$ \mathcal{L}\{e^{at}f(t)\} = \int_0^\infty e^{-st} \left[e^{at}f(t)\right] \,dt $$
Using the property of exponents $e^x e^y = e^{x+y}$, we combine the exponential terms:
$$ = \int_0^\infty e^{-st+at} f(t) \,dt = \int_0^\infty e^{-(s-a)t} f(t) \,dt $$
Let $s' = s-a$. Then the integral becomes:
$$ = \int_0^\infty e^{-s't} f(t) \,dt $$
This is, by definition, the Laplace Transform of $f(t)$ with $s$ replaced by $s'$, i.e., $F(s')$. Substituting back $s' = s-a$:
$$ \mathcal{L}\{e^{at}f(t)\} = F(s-a) $$
This property is very useful for finding transforms of functions that are products of an exponential term and another function whose transform is known.
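The theorem can be verified on a concrete instance. As a sketch (assuming SymPy is available), take $f(t) = t^2$ with $F(s) = 2/s^3$; the theorem predicts $\mathcal{L}\{e^{3t}t^2\} = F(s-3) = 2/(s-3)^3$:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# F(s) for f(t) = t^2, then the transform of e^{3t} f(t)
F = sp.laplace_transform(t**2, t, s, noconds=True)
shifted = sp.laplace_transform(sp.exp(3*t) * t**2, t, s, noconds=True)

# First Shifting Theorem: the shifted transform equals F(s - 3)
assert sp.simplify(shifted - F.subs(s, s - 3)) == 0
print(shifted)
```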
Laplace Transforms of Derivatives
This property relates the Laplace Transform of the derivative of a function to the transform of the function itself and its initial values. It is key to solving differential equations.
Transform of the First Derivative, $f'(t)$:
Derivation (using integration by parts, $\int u \,dv = uv - \int v \,du$):
Let $u = e^{-st}$ and $dv = f'(t)\,dt$. Then $du = -se^{-st}\,dt$ and $v = f(t)$.
$$ \mathcal{L}\{f'(t)\} = \int_0^\infty e^{-st} f'(t) \,dt = \left[e^{-st}f(t)\right]_0^\infty - \int_0^\infty f(t)(-se^{-st}) \,dt $$
Evaluating the first term:
$$ \left[e^{-st}f(t)\right]_0^\infty = \lim_{t\to\infty} \left(e^{-st}f(t)\right) - e^{-s \cdot 0}f(0) $$
Assuming $f(t)$ is of exponential order, so that $\lim_{t\to\infty} e^{-st}f(t) = 0$ for sufficiently large $\text{Re}(s)$:
$$ = 0 - 1 \cdot f(0) = -f(0) $$
The second term is:
$$ - \int_0^\infty f(t)(-se^{-st}) \,dt = s \int_0^\infty e^{-st}f(t) \,dt = sF(s) $$
Combining these results:
$$ \mathcal{L}\{f'(t)\} = sF(s) - f(0) $$
Transform of the Second Derivative, $f''(t)$:
Derivation (applying the first derivative rule twice):
Let $g(t) = f'(t)$. Then $g'(t) = f''(t)$.
Using the rule for the first derivative on $g(t)$:
$$ \mathcal{L}\{f''(t)\} = \mathcal{L}\{g'(t)\} = s\mathcal{L}\{g(t)\} - g(0) $$
Since $g(t) = f'(t)$ and $g(0) = f'(0)$:
$$ = s\mathcal{L}\{f'(t)\} - f'(0) $$
Now, substitute the known transform $\mathcal{L}\{f'(t)\} = sF(s) - f(0)$:
$$ = s[sF(s) - f(0)] - f'(0) $$
$$ = s^2F(s) - sf(0) - f'(0) $$
This pattern can be generalized for higher-order derivatives.
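Both derivative rules can be checked on a concrete function. As a sketch (assuming SymPy is available), take $f(t) = \sin t$, for which $f(0) = 0$ and $f'(0) = 1$:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = sp.sin(t)
F = sp.laplace_transform(f, t, s, noconds=True)   # 1/(s^2 + 1)

# First-derivative rule: L{f'} = s F(s) - f(0)
lhs1 = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
rhs1 = s*F - f.subs(t, 0)
assert sp.simplify(lhs1 - rhs1) == 0

# Second-derivative rule: L{f''} = s^2 F(s) - s f(0) - f'(0)
lhs2 = sp.laplace_transform(sp.diff(f, t, 2), t, s, noconds=True)
rhs2 = s**2*F - s*f.subs(t, 0) - sp.diff(f, t).subs(t, 0)
assert sp.simplify(lhs2 - rhs2) == 0
```

Here $f'' = -\sin t$, and indeed $\frac{s^2}{s^2+1} - 1 = \frac{-1}{s^2+1}$.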
Heaviside Unit Step Function & Second Shifting Theorem
Heaviside Unit Step Function, $u(t-c)$
The Heaviside Unit Step Function, often denoted $u(t)$, $H(t)$, or $\theta(t)$, is a discontinuous function whose value is zero for negative arguments and one for positive arguments. Its shifted form is
$$ u(t-c) = \begin{cases} 0, & t < c \\ 1, & t \ge c \end{cases} $$
where $c \ge 0$ is the time at which the step occurs. It is fundamental for representing signals that are "switched on" at a specific time, and for defining piecewise functions.
Laplace Transform of the Heaviside Function:
Derivation:
By definition of the Laplace Transform:
$$ \mathcal{L}\{u(t-c)\} = \int_0^\infty e^{-st} u(t-c) \,dt $$
Since $u(t-c) = 0$ for $t < c$ and $u(t-c) = 1$ for $t \ge c$, the integral limits change:
$$ = \int_c^\infty e^{-st} \cdot 1 \,dt $$
Integrating $e^{-st}$ with respect to $t$:
$$ = \left[-\frac{1}{s}e^{-st}\right]_c^\infty $$
$$ = \lim_{T\to\infty} \left(-\frac{1}{s}e^{-sT}\right) - \left(-\frac{1}{s}e^{-sc}\right) $$
For $\text{Re}(s) > 0$, $\lim_{T\to\infty} e^{-sT} = 0$. Therefore:
$$ = 0 - \left(-\frac{1}{s}e^{-sc}\right) = \frac{e^{-cs}}{s} $$
Second Shifting Theorem (Time Shifting)
The Second Shifting Theorem, also known as the Time-Shifting Theorem, states that if $\mathcal{L}\{f(t)\} = F(s)$, then the Laplace Transform of $f(t-c)u(t-c)$ is $e^{-cs}F(s)$.
Derivation:
By definition of the Laplace Transform:
$$ \mathcal{L}\{f(t-c)u(t-c)\} = \int_0^\infty e^{-st} f(t-c)u(t-c) \,dt $$
Since $u(t-c)=0$ for $t < c$ and $u(t-c)=1$ for $t \ge c$, the integral becomes:
$$ = \int_c^\infty e^{-st} f(t-c) \,dt $$
Let $v = t-c$. Then $t = v+c$ and $dt = dv$; when $t=c$, $v=0$, and as $t\to\infty$, $v\to\infty$.
Substituting these into the integral:
$$ = \int_0^\infty e^{-s(v+c)} f(v) \,dv $$
Using the property of exponents $e^{x+y} = e^x e^y$:
$$ = \int_0^\infty e^{-sv} e^{-sc} f(v) \,dv $$
Since $e^{-sc}$ is constant with respect to $v$, it can be pulled out of the integral:
$$ = e^{-sc} \int_0^\infty e^{-sv} f(v) \,dv $$
The remaining integral is, by definition, $\mathcal{L}\{f(v)\} = F(s)$. Thus:
$$ \mathcal{L}\{f(t-c)u(t-c)\} = e^{-cs}F(s) $$
This theorem is crucial for analyzing systems with delayed inputs or for converting piecewise functions into the s-domain.
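The derivation above can be mirrored numerically for a concrete case. As a sketch (assuming SymPy is available), take $f(t) = t$ (so $F(s) = 1/s^2$) and delay $c = 2$; integrating directly over $[2, \infty)$, where the step is 1, should give $e^{-2s}/s^2$:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# L{(t-2) u(t-2)}: the step kills the integrand on [0, 2), leaving
# the integral from 2 to infinity, exactly as in the derivation.
direct = sp.integrate(sp.exp(-s*t) * (t - 2), (t, 2, sp.oo))

# Second Shifting Theorem prediction: e^{-2s} * F(s) with F(s) = 1/s^2
assert sp.simplify(direct - sp.exp(-2*s)/s**2) == 0
print(sp.simplify(direct))
```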
Initial and Final Value Theorems
Initial Value Theorem (IVT)
The Initial Value Theorem allows us to find the initial value of a function $f(t)$ (i.e., $f(0^+) = \lim_{t \to 0^+} f(t)$) directly from its Laplace Transform $F(s)$, without needing to find the inverse transform:
$$ f(0^+) = \lim_{s \to \infty} sF(s) $$
Conditions: This theorem applies if $f(t)$ and $f'(t)$ are Laplace transformable and the limit $\lim_{s \to \infty} sF(s)$ exists.
It is useful for checking initial conditions in the solution of differential equations or analyzing system behavior at $t=0$. The theorem follows from the Laplace transform of a derivative: $\mathcal{L}\{f'(t)\} = \int_0^\infty e^{-st}f'(t)\,dt = sF(s) - f(0^+)$. As $s \to \infty$, the integral $\int_0^\infty e^{-st}f'(t)\,dt \to 0$ (provided $f'(t)$ is bounded or of exponential order), so $0 = \lim_{s \to \infty} sF(s) - f(0^+)$, which gives the theorem.
Final Value Theorem (FVT)
The Final Value Theorem allows us to find the final (steady-state) value of a function $f(t)$ (i.e., $\lim_{t \to \infty} f(t)$) directly from its Laplace Transform $F(s)$:
$$ \lim_{t \to \infty} f(t) = \lim_{s \to 0} sF(s) $$
Conditions for Applicability: This theorem is valid only if the limit on the left exists AND all poles of $sF(s)$ lie in the left half of the $s$-plane (i.e., have negative real parts); equivalently, $F(s)$ may have at most a single pole at the origin $s=0$ and no other poles on the imaginary axis or in the right half-plane. Otherwise the FVT does not apply, and $f(t)$ may not settle to a finite steady-state value.
It is useful for determining the steady-state response of systems. The derivation also starts from $\mathcal{L}\{f'(t)\}$. As $s \to 0$, $\int_0^\infty e^{-st}f'(t)\,dt \to \int_0^\infty f'(t)\,dt = [f(t)]_0^\infty = \lim_{t \to \infty}f(t) - f(0^+)$. Thus $\lim_{t \to \infty}f(t) - f(0^+) = \lim_{s \to 0} [sF(s) - f(0^+)]$, which simplifies to the theorem.
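Both theorems can be applied to the partial-fractions example from earlier, $F(s) = \frac{3s+7}{(s+1)(s+3)}$ with $f(t) = 2e^{-t} + e^{-3t}$, so $f(0^+) = 3$ and $f(t) \to 0$. A sketch assuming SymPy is available:

```python
import sympy as sp

s = sp.symbols('s')

# Poles of sF(s) are at s = -1 and s = -3 (left half-plane), so the
# Final Value Theorem applies.
F = (3*s + 7) / ((s + 1)*(s + 3))

initial = sp.limit(s*F, s, sp.oo)   # IVT: f(0+)
final = sp.limit(s*F, s, 0)         # FVT: lim_{t->oo} f(t)
print(initial, final)  # 3 0
```

These agree with evaluating $f(t) = 2e^{-t} + e^{-3t}$ directly at $t=0$ and $t\to\infty$.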