# Amar G. Bose: 6.312 Lecture 03

DR. AMAR G. BOSE: So I did manage to get in a little early and copy down what we had developed last time. I hope I've remembered to give the same numbers to these. We basically derived from Newton's Law this relationship, which, if you look at it in differential form, is really F equals ma. This is mass density. Acceleration is the derivative of velocity, and this is pressure. It's really an F equals ma type of expression. This came from the Gas Law, relating pressure to change in volume, and this one came from the so-called Continuity Equation, which said this little particle we're watching, if this side moves faster than this side, it's expanding, and therefore, there's an increase in the volume.

Then we set about solving these, and I think we got through that for the one-dimensional case. Namely, we have the variables u, tau, and p. We want a wave equation in p. We've got to eliminate u and eliminate tau. Eliminating tau was easy. Just plug this expression into 2, 3 into 2, and you come out with the partial with respect to t is equal to the right-hand side here. Then we have to get rid of u, but u was a partial with respect to x here, and in number 1, the one equation that we haven't used is the partial with respect to t. But taking the partial and reversing the order is not a problem, as long as these mixed partials are continuous.

And so we said, all right. That's fine. We'll differentiate the first one, we'll take the partial with respect to x, and we'll get a mixed partial here, and we'll take the partial of this fellow with respect to t, and we'll get a mixed partial, and then we can equate the two. Taking the partial of this with respect to t, we got this, mixed. And then from one, taking the partial with respect to x, we got this, we can equate this, because the order doesn't matter. Equate this to this. And so if, in fact, we simply take this expression and plug this value in, we come out immediately with this, which is the one-dimensional wave equation.
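For reference, the chain of steps just described can be written out compactly (this is my transcription of the board work; the numbering 1, 2, 3 follows the lecture):

```latex
% (1) Newton:      \partial p/\partial x = -\rho_0\,\partial u/\partial t
% (2) Gas law:     \partial p/\partial t = -(\gamma p_0/V_0)\,\partial\tau/\partial t
% (3) Continuity:  \partial\tau/\partial t = V_0\,\partial u/\partial x
\begin{align}
  \frac{\partial p}{\partial t} &= -\gamma p_0\,\frac{\partial u}{\partial x}
      && \text{(3 into 2)} \\
  \frac{\partial^2 p}{\partial t^2} &= -\gamma p_0\,\frac{\partial^2 u}{\partial x\,\partial t}
      && \text{(differentiate with respect to } t\text{)} \\
  \frac{\partial^2 p}{\partial x^2} &= -\rho_0\,\frac{\partial^2 u}{\partial t\,\partial x}
      && \text{(differentiate 1 with respect to } x\text{)} \\
  \frac{\partial^2 p}{\partial x^2} &= \frac{\rho_0}{\gamma p_0}\,\frac{\partial^2 p}{\partial t^2}
      && \text{(equate the mixed partials)}
\end{align}
```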

Now, we mentioned that we're not going to really use vector equations throughout the subject. But I wanted to open the door for this, because in other disciplines that we may not even know about, you may come up with situations where you have to go to oddball coordinate systems, or you want to derive general equations in terms of systems that are independent of coordinates. And the only reason to go to the vector equation is once you have it, you have it in all coordinate systems because they're tabulated.

So I believe we had gone so far as to derive this relationship from the one-dimensional picture. You see, the only thing I really want to get across here is that the one-dimensional is the easy one. And it is a very easy step to go from one-dimensional, just looking at what your expressions are saying in the general sense, to the vector. And then you have enormous power. And it's much easier to make the step from one-dimensional to the vector than it is from one-dimensional rectangular to one-dimensional polar, even.

So we developed these three equations, and you can see the parallelism here. This partial of p with respect to x goes into gradient p, and of course gradient p in one dimension is partial of p with respect to x. That's the ultimate check. This fellow was independent of coordinate systems, one that came from the gas law, so he's the same. And this one here, partial of u with respect to x, went to-- now it's a vector over here. So it went to divergence, and divergence of a vector, one dimension, is just the partial of u with respect to x, the check again.

So all that remains is to do the steps of deriving the wave equation in vector form, the same way we did this. And again, you can parallel the whole thing. We used here-- we wanted to get rid of tau, and so we plugged this fellow into there. Let's do exactly the same thing. We want to get rid of tau, so we'll take this 3A and put it into 2. We might as well use this.

So let's see. 3A into 2 would leave, on the left side, partial of p with respect to t-- I'm starting with 2-- is equal to minus gamma p0 over v0. Plug this in here, and I get a v0, which will cancel that. And we get gamma p0 with a minus sign-- yes, we have gamma p0 divergence of u.

Then we look at this again, say, gee, let's see if that checks. When we did this over here, we got partial of p with respect to t. That's OK. And we got over here-- partial of u with respect to x. Partial of u with respect to x is exactly divergence of u when u is one dimension. So it looks OK. And we'll call that 4A.

Next step we did over here was to realize that we have over here a partial with respect to u, and we have, over there on the top, the one equation we hadn't used-- the partial of u with respect to t. So we want to get them out. We want to get u out. So mixed partials can be-- the order doesn't matter. So we said, OK. We'll differentiate this with respect to t, and we'll get a mixed partial here. And let's just do that over here. Second partial of p with respect to t is equal to minus gamma p0. Divergence, now, it's really partial of the divergence of u with respect-- well, I'll write it that way. Partial with respect to t of divergence of u. And that we called equation 5. OK? We're just, again, going right down the line, and at each step, you can check.

Now, we have here a divergence of u in a partial with respect to t. We haven't used equation 1. So I have to get a divergence into equation 1 in order to be able to eliminate this term. And this, just for a second, I'll write out here, you know, is the divergence of the partial of u with respect to t, changing the order. So I have partial with respect to t of u in number 1. All I have to do now is take the divergence of both sides. So the divergence of the gradient of p is equal to-- from 1A up here-- minus rho 0 divergence of the partial of u with respect to t. And we now have something that we can use here to get rid of u.

Now, divergence of the gradient. Anybody happen to remember what--? What's that called?

SPEAKER 1: [INAUDIBLE]

DR. AMAR G. BOSE: Laplacian. It has a symbol, del squared. OK. So del squared p, which is this part here. The Laplacian of p is equal to minus rho 0, and the divergence of this. The divergence of this, we get right from here. So it's going to be rho 0. Plug this in for this, and we get rho 0 over gamma p0, second partial of p with respect to t.

Now, Laplacian rectangular coordinates-- remember second partial of p with respect to x? By the way, is this a vector or is this a scalar? The del squared of this scalar? Scalar, sure. Because gradient of a scalar is a vector, divergence of a vector is a scalar, and it better well be, because the other side is clearly a scalar.

So the Laplacian would be the second partial with respect to x. And you notice a certain similarity over there. Second partial of p with respect to x, and you have exactly the same wave equation. So this is the vector wave equation. Any questions? Yeah?
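And the vector version of the board work, as just described (again my transcription; the labels 1A and 4A are the ones used in lecture):

```latex
\begin{align}
  \nabla p &= -\rho_0\,\frac{\partial \mathbf{u}}{\partial t} && \text{(1A)} \\
  \frac{\partial p}{\partial t} &= -\gamma p_0\,\nabla\!\cdot\mathbf{u} && \text{(4A)} \\
  \nabla^2 p &= \frac{\rho_0}{\gamma p_0}\,\frac{\partial^2 p}{\partial t^2}
      && \text{(vector wave equation)}
\end{align}
```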

SPEAKER 2: [INAUDIBLE]

DR. AMAR G. BOSE: Yeah. It should be. Because when I take-- where the heck is it? When I take this term and I plug in for it this term, I have a minus in that equation, and I have a minus in that equation, and that amounts to a plus, thank you very much.

So it's parallel all the way down. And I think, again, I'll just repeat it, that we can see that just doing the one-dimensional one, and step by step imitating it in the vector form, it is such an easy process. And you wind up with this. And the reason for winding up with this is that Laplacian is tabulated in all sorts of coordinate systems, so you can drop out the whole wave equation in any system you wish.

So now the rest of today, I'd like to talk about solutions to this. Now, all of the insight that we're going to build in the subject virtually comes from this one-dimensional wave equation. The most we'll do is spherical, but spherically symmetric, which really is a one-dimensional wave equation, which you can get anytime you wish, right from here. Maybe we might have an example or something in cylindrical. But the insight, just like the mathematics you can start from here, the insight that you can build starts from here. And then if you have more complicated situations, it's so easy to see what's going on in those complicated situations. Even, as we'll see later on, in a room like this, there are millions-- and you'll be able to calculate how many-- millions and millions of reflections that are reaching your ear from my voice. And it's a very complicated wave pattern in the room.

But if you understand how waves go in a little room, namely a little tube, waves going back and forth like this, let's say with two borders on it, you will be able to visualize what's happening in a complex situation like a room. So I hope that-- I tried to warn you many times, don't get scared for the first couple of days. It looks like you're heading for something very complicated. Well, things aren't really very complicated. It's usually the math that complicates them. And if you get the physical concepts down, and then you realize the rest is bookkeeping, you can have insight into what you think now are very complicated situations.

OK. So we want to solve this wave equation. It's a second order partial differential equation. Now, in physics somewhere-- I don't know whether it was high school-- I don't know if you saw equations like that in high school. But somewhere in the first two years, in physics or math or something, you've seen equations like this, I hope. And you've probably seen-- this is the same kind of equation with different symbols you get for a wave on a string, for electromagnetic waves, for compressional waves in a metal bar. It comes again and again. The same kind of simple equation.

And the way that you've probably been led to a solution-- I don't know that, but-- or not led to a solution, given a solution, and given a function, and you're told, verify that this is a solution. And let's go through-- let's do it that way for a minute. You probably have been told-- I don't know what the symbol was, what discipline you were talking about. But that p which is a function of x and t-- the general solution of this is p of t plus and minus x over c. And then you're asked to verify this. This was donated to you somehow, and you were asked to verify. Let's just do it once. The exercise may be useful.

So that I don't have to carry around the baggage of a large number of pluses and minuses that proliferate here, I'll just take the one for a plus or a minus-- in this case, I erase the plus, so a minus. You can do it for a plus. Or with a little more sophistication, you can hold the plus and minus everywhere.

So let's just show that any function p of t plus and minus x over c is, in fact, a solution of that. Well, you're going to have to take derivatives of this function, partials with respect to x, partials with respect to t. So how do you do that? When you take the partial of this with respect to one of the variables in the argument, it's the total derivative with respect to the argument times the partial of the argument with respect to the particular variable. In other words, the partial of this thing with respect to the argument-- let me call that the derivative of p with respect to the argument. I'll just call it p prime.

So partial of p of x and t with respect to, let's see. What do I want over here? There's an x first, OK. Partial with respect to x is partial with respect to the argument, which is p prime, times partial of the argument with respect to the variable, and the variable is x. So that drops out a minus sign, and the 1/c.

Now if I took the second partial, again, I take the partial of this thing, partial of p-- second partial of p of x and t with respect to x is equal to-- again, it drops out. It's the partial of this thing with respect to the argument, which we'll call double prime. p double prime. And it's another minus sign, so that becomes a plus. And when you take the partial of this with respect to x, it drops out another 1/c. So it's 1 over c squared.

OK. That's so much for the left hand side of this equation. All we want to do is show that that's a solution, so we plug it into the right hand side, and if it comes out to be the same thing, we're in. So let's do it. Partial of p with respect to t is p prime. p prime-- all right. Well, there's p prime, which is partial with respect to the argument, and then partial of the argument with respect to the variable, but that's just unity. Second partial of p with respect to t is just p double prime.
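The verification just done by hand can also be run symbolically. A minimal sketch using sympy (the library choice is mine, not from the lecture), with f an arbitrary twice-differentiable waveform:

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
f = sp.Function('f')            # arbitrary waveform shape
p = f(t - x / c)                # candidate solution p(x, t) = f(t - x/c)

lhs = sp.diff(p, x, 2)          # second partial with respect to x
rhs = sp.diff(p, t, 2) / c**2   # (1/c^2) times second partial with respect to t

# both sides come out to f''(t - x/c) / c^2, so the residual is zero
assert sp.simplify(lhs - rhs) == 0
```

The same check goes through unchanged with `f(t + x / c)`, the other sign of the plus-and-minus.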

Now I let the cat out of the bag a little early. But I told you that you would probably be-- you have seen-- they'll say this is a solution. Let's see-- in that thing, by the way, if you remember anything about waves, wave equations, this is some constant. But anybody have any idea what the constant is? Velocity, you think so? OK.

Now, here is one of these steps in which it is absolutely not obvious, but not restrictive. If I-- I apologize for letting the cat out of the bag a little early here. Just forget what's on the middle board. Here you are, and you want to solve this equation. You can go along as we're going to do. We're actually going to solve this. This was a little detour that I did, and I didn't realize what I was going to do when I did it. You're going to solve this equation. You could carry around all this baggage, all the way to the bitter end. Or you could say, well, I'll just give it a new name. I'll call it something.

Well, here's a step which you would only do if you either had solved a lot of equations of this type, or you-- no, no. You would only do it if you knew the answers. It's not at all restrictive, but I'm going to just call it-- let's say very firmly-- with knowledge of the answer, but without restriction. Let rho 0 over gamma p0-- these rhos and the p's cause me a problem-- equal 1/c squared. I'm just letting it equal a constant. So now, this equation would have been written: the second partial of p with respect to x squared is equal to 1 over c squared, second partial of p with respect to t.

And that's the equation that you may have seen before, and people would tell you, ah, this is the solution, now verify it. Well, that's pretty easy. Because when you take the second partial of p with respect to x, you get p double prime over c squared. When you take the second partial of p with respect to t, you've got p double prime, and there's already a 1/c squared there. So in fact, this is a solution to this equation.

Now, that's just in case you've seen it before, and that's really all we have to do to establish that this function, which we'll talk a lot about, is a solution to the equation. But I'm going to do it in a way which looks more complicated but is very, very fundamental, and will be useful to us later on. So we're going to start over and do it.

OK. Second order, partial linear differential equation. Does anybody recall what the fundamental approach to solving these beasts is? There's one key in mathematics that you've probably had developed for you. There's one key. Without it, life is miserable. With it, it becomes very simple. In general. I'm not talking about pressures or anything else. Second order-- I mean, it doesn't have to be second order. But linear partial differential equations. They're partials because the function is a function of more than one variable. All right. Nobody-- separation of variables. That is exactly the right wording. What does it mean?

SPEAKER 3: [INAUDIBLE]

DR. AMAR G. BOSE: Yeah. The product solution. In other words, that's not at all obvious, and that's one of these beautiful results that mathematicians come up with, probably before there was any use for it, even. So you can assume a product solution, and there's no loss of generality in what you get. In other words, p of x and t-- I'll just say "let," but that's based on a theorem that's well-established. p of x and t equal to-- let's say p1 of x times p2 of t. And that makes life very nice, because watch-- well, let me tell you what will happen, and you can see it as it happens. When you take the product solution, you will reduce a partial differential equation, a linear one, to two separate ordinary differential equations, which we know how to solve.

So p of x-- let's plug it in here. Second partial with respect to x. Well, the only thing that depends upon x is this one right here. So that becomes second partial-- yeah. The partial of this with respect to x is the derivative of this with respect to x, because it only depends on x. So the second derivative of p1 of x with respect to x, times p2 of t, is what the left-hand side of this equation becomes. The right-hand side becomes 1/c squared-- because I've let all this be equal to 1/c squared-- times the second partial with respect to t. Well, the only thing in p of x and t that depends on t is p2 of t. So we have p1 of x, which is just like a constant when I'm taking the partial with respect to t, times the second derivative of p2 of t with respect to t.

And look-- if I were to take the x over here and the t over here, I would wind up with an equation with-- and this is really the heart of it-- I'd wind up with an equation in which one side depended only on one variable and the other side on the other variable. And if the variables are independent, how can that be? I've now proved 0 is equal to 1.

Let's do it. 1/p1 of x-- I'm taking this all over there. Second derivative of p1 of x with respect to x is equal to 1 over c squared-- take the t over here-- times 1/p2 of t, second derivative of p2 of t with respect to t. This depends only on x. This depends only on t. x and t are independent. What do we do about that?

Well, it tells you something. It tells you that either both sides better be 0, or-- that doesn't seem to work very well. If you had any p of t in here, and you had some derivative of the function here, it's unlikely that that would be 0, except in the trivial situation that everything is 0, which we're not too interested in. So at best, this can equal a constant.

That's OK. If there's a solution with this equaling a constant, that's fine. Because if I change anything over here, and this is the same constant as the left-hand side, OK.

So now here's another one of these steps where we can just say, ah, it's equal to b. And then we carry b through, and then we'll find that b is something else, and whatnot, and have to change it. So it's easier to take-- again, knowing the solution, but no restriction. If you were doing it the first time, you'd just put down some constant here and go.

But we're not doing it the first time. And so with knowledge of the answer, but no restriction at all, I'm going to choose the constant to be minus some arbitrary constant, k squared, over c squared. This is still a constant, because this can be any complex number. And this is a constant. So this whole thing is nothing more than a constant. But just so that I can make it without carrying a lot of things through and changing more symbols than I'm going to have to at the end, I'll just give it this form. Which doesn't restrict this from being any complex number, which is exactly what we have here.

I don't think I need that. Let me see if I need any of this stuff. I think-- I'm really not sure, so I will start here. Oh, boy. It's going to make an interesting videotape. Yeah?

SPEAKER 4: [INAUDIBLE]

DR. AMAR G. BOSE: Oh. Because-- let's suppose this were any function of x. Some function of x over here, and some other function of t-- we know it's some function of t over here. Since I can choose x independently from t, I have a real problem with an equals sign between these two functions. OK?

Now, you can already see on the board that-- it's not an assumption; it's been proven, of course, in math that we can do it-- by representing p as a product, we already have two ordinary second-order differential equations. That, equal to a constant, is nothing but an ordinary one. There are no partials in that. This, equal to a constant, is an ordinary differential equation. So let's write it out in a form that we're used to. We'll take the p1 of x across first. We get: second derivative of p1 of x with respect to x is equal to minus k squared over c squared times p1 of x. You're more used to that, I think we all are, in the form-- if I bring all this over and set it equal to 0. Homogeneous equation. Second derivative of p1 of x with respect to x plus-- take this over to the other side-- k squared over c squared, p1 of x, is equal to 0.

And for the other equation, the right-hand side now, let me just do that over here for the right side. We have, what? We can do that all in one step. I took the left-hand side and just multiplied p1 of x times this to get this, and then I simplified it. I think, without making mistakes, we might be able to write this. Second derivative of p2 of t, dt squared-- OK. And I'll take the t and multiply it over here, the p2 of t, and then I transfer it over to the other side.

Now, I'd better be a little careful, because I have a 1/c squared here. So I have 1/c squared-- better not do it. I'll get screwed up. Here. I can just do it in parallel to this. Second derivative of p2 of t with respect to t is equal to-- well, the right-hand side is 1/c squared times that-- is equal to minus k squared over c squared times p2 of t.

So now I can get rid of the c squareds, and I have: second derivative of p2 of t with respect to t, plus k squared p2 of t, is equal to 0. Two second-order, ordinary linear differential equations. Out of the partial realm altogether. Now we just have to find solutions to them.
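Collecting what was just written on the board (my transcription), the separated equation and the two ordinary differential equations it yields are:

```latex
\frac{1}{p_1(x)}\frac{d^2 p_1(x)}{dx^2}
  \;=\; \frac{1}{c^2}\,\frac{1}{p_2(t)}\frac{d^2 p_2(t)}{dt^2}
  \;=\; -\frac{k^2}{c^2}
\quad\Longrightarrow\quad
\frac{d^2 p_1(x)}{dx^2} + \frac{k^2}{c^2}\,p_1(x) = 0,
\qquad
\frac{d^2 p_2(t)}{dt^2} + k^2\,p_2(t) = 0 .
```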

Let's see. What functions could satisfy this? Let's look at this one and try to solve this. Any idea what kind of spatial functions could satisfy that equation? How about this function? This is x. Let's say this is p1 of x. Will that do it?

Well, let's find out. Second derivative of that-- well, the first derivative, I can just draw down here. This is easy to differentiate this. You get 1/2, and then you get this function. Next derivative. That's p, that's dp1 dx. Second derivative-- trying to draw it here. Second derivative of that is an impulse, one that goes twice as far down. This is height 1, this has area 1, that has an area 2, this has an area 1. Now, if you can figure out how you-- that's the second derivative, these impulses. The first derivative is a square wave. If you can figure out any way in which impulses here and a square wave here is going to add up to 0, let me know. It's not going to work. Yeah?

SPEAKER 5: Sine k over ct. Sine k over ct?

DR. AMAR G. BOSE: Sine of k. Maybe a sine function would work.

SPEAKER 5: [INAUDIBLE]

DR. AMAR G. BOSE: Sine function is sort of nice in the sense, when you take derivatives, you get sort of sine functions. The first time you take a derivative, you get a sine function displaced, which is a cosine, and the second time, you get one the same. But think of something simpler. Remember the approach of Professor Chu? The simplest thing that will do it. Simplest thing that will do it-- yeah?

Exponential, aha. Yeah. An exponential has the beautiful property that its derivative is the same shape. So two things of the same shape have a chance. All you have to do, whatever this thing is, if this term and this term had the same shape in x, then you could just adjust the constants, and you could make one the negative of the other, and you could get it. So this is why, in ordinary differential equations, the exponential is it. No matter what the order of the differential equation is, every term in it has exactly the same shape. And so you can make them add up to 0 very easily.

OK. So let's let this thing, let p1 of x, be a complex amplitude, p1, e to the, let's say, rx. We don't know which exponential-- any exponential has the same shape. The r will give me, when I differentiate, a constant that comes out, and gives me some freedom to make these two terms match up.

Let's similarly let p2 of t equal some exponential. Some complex amplitude p2 e to the gx, let's say. g is a constant, as r was. Only reason to put it in is so that when I differentiate, I'll get some freedom here. Yeah? Oh, thank you, thank you, thank you. p2 of t, e to the gt.

OK. So let's do it. Let's try this. So let's put this into this equation. Differentiate twice. Each time you differentiate this expression-- this is a constant. Derivative of e to the rx is just r e to the rx. So you get r squared p1-- for the first term here, putting this into here, p1 e to the rx, plus k squared over c squared-- p1 of x, well, that just is cap p1 e to the rx-- is equal to 0. p1 is common to both terms, e to the rx is common to both terms. So if I want to get a nontrivial solution, i.e. one in which p1 isn't 0, I must have r squared plus k squared over c squared equal to 0. Or that implies r equal to plus or minus jk over c, where j, of course, is the square root of minus 1.

So it says what? For a given k-- we said this thing could be an arbitrary constant there-- for a given k, the rs that will work in here, that will be solutions, will depend upon that k. So you give me k and I give you r and the thing works.

Over here, what do we get? I could write down the solution-- wait a minute. p1 of x is equal to p1 e to the rx, is equal to some complex amplitude p1 e to the plus and minus jk over c, times x, since r is plus or minus jk over c. We'll get rid of these pluses and minuses later on.

OK. Similarly for the second one. Which is another second-order ordinary differential equation. Plug this into here. Follow all the steps down. We get second derivative-- we get g squared p2 e to the gt, plus k squared times p2 e to the gt, equals 0. Nontrivial solution says g squared plus k squared is equal to 0, or g equals plus and minus jk. So p2 of t is then equal to p2-- let's see. e to the gt, and g is this. e to the plus and minus jkt.
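Those two exponential solutions-- and the product of them, which was the assumed form of p back at the separation step-- can be checked symbolically. A sketch with sympy (my choice of tool, not the lecture's), taking one sign of each plus-and-minus:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
k, c = sp.symbols('k c', positive=True)

p1 = sp.exp(-sp.I * k * x / c)   # spatial factor, r = -jk/c (one sign)
p2 = sp.exp(sp.I * k * t)        # temporal factor, g = +jk (one sign)

# p1'' + (k/c)^2 p1 = 0   and   p2'' + k^2 p2 = 0
assert sp.simplify(sp.diff(p1, x, 2) + (k / c) ** 2 * p1) == 0
assert sp.simplify(sp.diff(p2, t, 2) + k ** 2 * p2) == 0

# and the product solves the original wave equation
p = p1 * p2
assert sp.simplify(sp.diff(p, x, 2) - sp.diff(p, t, 2) / c**2) == 0
```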

Now the total solution, we remember, was the product. Namely, we started out over here, and we made the assumption that p of x and t is the product of these two. So finally, we have p of x and t is equal to exactly what we started with-- p1 of x, p2 of t, which is equal to p1, p2, e to the plus and minus jk-- oh yes, x I forgot over here. I put in the r, which was jk over c, and x-- there. OK. So it's e to the plus and minus jk-- let's see if I can do this all at once-- times, this is t, plus and minus x over c. That is the general solution. In other words, I multiplied this function and this function together, got this. And the plus and minus signs all over the place-- they take care of all the combinations. So here is a solution.

Now, for those of you who have been involved with systems or signals and systems let's get this in a little bit of shape, a little bit closer to what you've seen in terms of symbols. This k, remember, is an absolutely arbitrary complex number. So let's let-- this is changing nothing, but just getting it in a form that you may recognize. Let the jk go to s, which is a complex number. If this is any arbitrary complex number, and this is such a one, I don't lose anything by making that kind of an arrangement.

So then I would have p of x and t is equal to-- and this thing I'll just call some p first. There's one constant and another arbitrary constant; this is an arbitrary constant. So I get p e to the s, times t plus and minus x over c.

Now in general-- and this statement will sound like things you've heard before, but it really won't ring clear until we do some of it. In general, what happens is, this is a solution. But when you specify your initial conditions-- what happens at some fixed time, maybe time t equals 0-- and your boundary conditions-- what happens to the pressure at some x; I mean, you may have a wall there, or you may have an open tube-- that's when you determine this.

Now, for different s's, this thing, of course, would be different. For the different s's that could exist in the system, when you apply the boundary conditions, you'll get a different constant. So it's useful to make that thing, just for memory purposes, or reminding purposes, to remind you that that's a function of s, and just write p of x and t is p of s e to the s, t plus and minus x over c.

OK. Now, that doesn't quite look like what we did. Let's see. Where is that thing? Maybe it's gone. Under this one? Let's try it. Oh yeah. p of x and t is p of t-- I used the minus, and you could go through the same argument with a plus. So it's p of t plus or minus x over c. This is an exponential of the same argument.

That doesn't quite look like any function. That solution told us, hey, anything you do, any p of t plus or minus x over c is a solution. This one says, well, this is a solution.

So now, how many of you have not had a course in Fourier? Course meaning, have not used Fourier integrals in any subject? OK, just a couple. All right. What I'm going to do-- just two, so we're lucky. But don't worry, it's not required.

What I'm going to do is just open the door a little bit so you can peek through at something that is so fundamental in system theory of any discipline. So this detour that we're going to take-- just sit there and see if you follow the detour. Don't bother copying it down or anything. I mean, that's my advice. I don't care if you do. But I would like you just to think about it.

Let's see. I don't need this anymore.

Let me write down a couple of exponential transforms. I don't know if this is the form you've seen them in, but don't worry about the overall form. f of t is equal to 1 over 2 pi j, integral minus j infinity to plus j infinity, of f of s e to the st ds. And f of s is equal to the integral from minus infinity to plus infinity of f of t e to the minus st dt.

This is a pair in which you can take an f of t, and you come out with this thing, with this function in a different variable, s. You can take that function and put it back in here, and you come out with a time function again.

These play an incredibly important role in all of linear theory. Maybe we'll talk a little bit about that later. But I just want you to look at this as-- let me rewrite it. 1 over 2 pi j, integral minus j infinity to j infinity, of f of s ds, e to the st.

Now, you all know that an integral is a limit of a sum. This integral can be thought of-- before you got to the limit-- as a sum of exponentials, of different s's, different values of s. The amplitude of each exponential is this. In other words, if I broke the frequency-- this, for those of you who have seen it-- as you know, this is the frequency domain, so I'll just use that term. If I broke the frequency domain up into large intervals, this ds, delta s, would be pretty large. As I begin to get it finer and finer and finer, this gets very, very small. The amplitude of each of these terms gets very, very small, but there are a heck of a lot of them. The finer I divide the spectrum up, the more of them there are.

So I can think of this thing as a sum of exponentials of different values of s. And that makes up the time function f of t. Any time function. When I say "any," it's clearly something that's transformable, but all the real functions, don't worry, are.

Now suppose I took this expression and let that t go to t minus x over c. Just a change of variables. Instead of putting t in wherever it occurs, I put another variable in, which I could call-- well, I'm going to call it this: t minus x over c. Then look what happens. You get f of t minus x over c here is equal to 1 over 2 pi j, integral minus j infinity to plus j infinity, of f of s e to the s, t plus and minus-- what did I do? I'll put a plus and minus in, just to do both at the same time this time-- plus or minus x over c. And the ds stays; I can just write it as ds. OK.

So all I did from this equation to this is let t go to t plus or minus x over c. So what it really says is, here is the solution that we got out of these equations. If the system is linear, anything I put into a linear system is going to come out with e to the st. Same s that you put in. So I'm going to get a picture out like this. So I can make up any time function of the form p of t plus or minus x over c by a sum of these terms, because over here, I could see that. Namely, a sum of exponentials of amplitude F of s ds, like here, gives you any time function f of t. So I have, here, a sum of exponentials. Each term is one exponential, but a sum of those is capable of producing any time function of the form p of t plus or minus x over c.
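The substitution t → t − x/c shows up in the frequency domain as a factor e^{−sx/c} multiplying each complex amplitude. A small numerical sketch--not from the lecture, with made-up numbers, and with s = jω on the frequency axis--confirms that multiplying every component by that factor shifts the whole time function:

```python
import numpy as np

# Multiplying each frequency component F by e^{-j w x/c} should turn
# f(t) into f(t - x/c): the whole wave form delayed by x/c.

N = 256
dt = 0.01
t = np.arange(N) * dt
f = np.exp(-((t - 0.5) / 0.05) ** 2)      # a smooth pulse centered at t = 0.5

c = 10.0                                  # illustrative wave speed
x = 2.0                                   # delay by x/c = 0.2 s

w = 2 * np.pi * np.fft.fftfreq(N, dt)     # discrete frequencies
F = np.fft.fft(f)
shifted = np.fft.ifft(F * np.exp(-1j * w * x / c)).real

# The pulse peak moves from t = 0.5 to t = 0.7:
assert abs(t[np.argmax(shifted)] - 0.7) < dt
```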

Now, the real reason that Fourier was-- well, I'll spend a little more time. The real reason that Fourier was developed-- or I shouldn't say developed. It was done by mathematicians, not engineers. But the real reason it came into so much use originally was simply that, here's a linear system. You put in an x of t, you get out a y of t, and you know that if you have an impulse response h of t-- that's the response of this thing to an impulse-- you could get y of t as the convolution integral of h of tau, x of t minus tau, d tau, from 0 to-- [UNINTELLIGIBLE] h of t starts at 0. 0 to infinity. OK.

Now, that is a nuisance. To put an x of t into any system, be it an electrical network, be it a mechanical system, whatever it is, and do this integral, is terrible. Before computers, it was darn near impossible, and computers and this integral are mortal enemies, basically. It consumes an enormous amount of time to do this stuff.

And so what happened was, engineers grabbed onto the Fourier theory, and what they did was go down here and take the transform, and get an x of s-- frequency domain. Then you go across here, and instead of using h of t, you use the Fourier transform of h of t, which is h of s, we'll call it, which you know as the transfer function, or whatever you want to call it, for the system. You come out with a y of s, and you go back up with the inverse transform, and you get y of t.

Now, what a circuitous way. The only thing is, it's just like vectors. This is tabulated for all sorts of interesting wave forms. The transform is tabulated. And to get from x of s to y of s, the complex amplitudes of these, y of s over x of s is h of s. In other words, it's just multiplication. You multiply the transform, which is really easy to do, and then you go-- this is tabulated for the kind of systems people analyzed in the '30s and '40s and the '50s-- and you go back and get y of t.
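The circuitous route just described--transform, multiply, transform back--can be compared directly against the convolution integral. This sketch is not from the lecture; the input and impulse response are arbitrary made-up sequences, and the integral becomes a discrete sum:

```python
import numpy as np

# Two routes to the same output y: the time-domain convolution, and the
# frequency-domain route Y(s) = H(s) X(s) followed by the inverse transform.

x = np.array([1.0, 2.0, 0.5, -1.0])   # input x(t), sampled
h = np.array([0.5, 0.25, 0.125])      # impulse response h(t), sampled

# Direct route: the convolution sum.
y_direct = np.convolve(x, h)

# Circuitous route: transform both, multiply, invert.
n = len(x) + len(h) - 1               # pad so circular convolution = linear
Y = np.fft.fft(x, n) * np.fft.fft(h, n)
y_freq = np.fft.ifft(Y).real

assert np.allclose(y_direct, y_freq)  # both routes agree
```

The frequency-domain route wins because the multiplication is trivial, and--as the lecture notes--because transforms of common wave forms and systems were tabulated.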

So going this circuitous route enables you to get y of t. However, what happened when people started going this route is that engineers developed a heck of a lot of insight in this domain. They would notice, for example, that certain kinds of wave forms would have a transform that sort of died out after a certain point in frequency. And s is frequency, angular frequency. And so, aha. That said something: to pass that wave form through this system accurately, you only needed a system whose response was pretty flat over about this much. You didn't care what the heck it did after that, because the signal didn't have any energy up there.

So today, engineers have all their insight here. Very seldom is it in the time domain. You use the time domain primarily for nonlinear systems, because without superposition, you've had it. In other words, there's no point in saying you can make up any wave form by a sum of these wave forms if, when you put the sum through, the system doesn't give you the sum of the outputs to each component. It's all over.

So when people are working today on computers with nonlinear systems, they sort of use their intuition, and they change something here, and they put the time wave form in and see what comes out again. Most of the uses that you have for all of this stuff, in linear or nonlinear, are time wave forms. That's what you want. But in linear, your thinking is so facilitated in this domain, in the frequency domain.

Questions on this before I leave it? OK.

Now to remind me--you can put a-- what the heck was that? It's a p of s, thank you. OK. From here to here. Let's see.

Now, just a little bit about this form of the function, t minus x over c. We found we could get a general solution-- p of x and t is any function of t minus x over c. Let's just take a quick look at such a function.

Suppose I had a function-- you see, first of all, just remember that a function is very simple. It doesn't matter how many variables it has in here. Whatever this argument is, that determines what the value is. So suppose I had a function which went, let's say, from 0 up to maybe height 1. This is p of 0 and t. In other words, p of x and t, but with x equal to 0, would be p of 0 and t. And maybe this came down at some point a.

Now, suppose I look at that function not at x equals 0, but at x equals something else. Let's say x equals a. p of a and t. a is a constant. What's it look like? Well, p of a and t. How do I--? There's p of 0 and t. I want p of a and t.

It translates it this way. Let's see. This function that we had, when the argument was 0, what was in here was 0-- it was just starting. Before that it was 0. After a it was 0. So when the argument went from 0 to a, we had something.

So if I wanted to plot this thing as a function of time for some other x, I have to get to the point where this-- when t is very small, when t is 0, the argument is negative. Well, a negative argument gives nothing, huh? So I get to the point where this is equal to this, and then the whole show begins.

So let's see. I get to the point where t is equal to a over c, and the thing starts up. And it ends again at a over c plus whatever it is-- a. In other words, I started out with the argument at 0 here, and that's when the function started. If I advance in x, I have to go to a t that's equal to whatever I put x to. I put x to a before it starts, and so it goes. And that goes to a over c plus a, and then it goes down. Plus, let's see. Just a moment. Is that right? Yeah. At a over c, it starts. And then when t goes a more, it's-- yeah.
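The argument above can be checked numerically: a pulse seen at x = 0 reappears, unchanged, at x = a, starting at t = a/c. This sketch is not from the lecture; the pulse shape and the values of a and c are made up for illustration:

```python
import numpy as np

# p(x, t) = f(t - x/c): the same pulse arrives at x = a, delayed by a/c.

c = 343.0                      # wave speed (roughly sound in air, m/s)
a = 34.3                       # observation point, so a/c = 0.1 s

def pulse(arg):
    """p as a function of its single argument: 1 on [0, 0.1), else 0."""
    return np.where((arg >= 0) & (arg < 0.1), 1.0, 0.0)

t = np.linspace(0, 0.5, 5001)
p_at_0 = pulse(t)              # what you see at x = 0
p_at_a = pulse(t - a / c)      # what you see at x = a

dt = t[1] - t[0]
start_0 = t[np.argmax(p_at_0 > 0)]
start_a = t[np.argmax(p_at_a > 0)]

# The pulse at x = a starts a/c later, and has the same duration:
assert abs((start_a - start_0) - a / c) < 2 * dt
assert abs(p_at_a.sum() - p_at_0.sum()) <= 2   # same number of "on" samples
```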

Now, what does it say? That for a wave like this, in time and space, if I moved further down the x-axis, I would have to wait for the thing to come along. And the same thing came along. That's what you call a traveling wave. Attenuation in this subject won't come until much later.

So the same wave moves down. Now, how fast does it go? Well, if you wanted to, the way you would find how fast it would go, is you'd take any point on this wave form, and you'd see how fast you'd have to move down the x-axis to stay on that point. Well, the easiest point is 0, and as that thing travels down, just follow that point. Well, follow the point for the argument equal to 0, that's pretty easy. x equals ct. And so c is, in fact, the velocity of the wave going down there.
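The recipe just given--follow one point on the wave form and see how fast you must move down the x-axis to stay on it--can be carried out numerically. A sketch, not from the lecture, with illustrative numbers:

```python
import numpy as np

# For f(t - x/c) with f a step turning on at argument 0, the leading edge
# sits where t - x/c = 0, i.e. at x = c*t. Following it measures the speed.

c = 343.0

def leading_edge(t, x_grid):
    """x-position of the edge of f(t - x/c) along x_grid at time t."""
    f = np.where(t - x_grid / c >= 0, 1.0, 0.0)
    on = np.nonzero(f)[0]
    return x_grid[on[-1]]          # f is 1 for x <= c*t; edge = last on sample

x_grid = np.linspace(0, 1000, 100001)   # 1 cm spacing
x1 = leading_edge(0.5, x_grid)
x2 = leading_edge(1.0, x_grid)

speed = (x2 - x1) / (1.0 - 0.5)
assert abs(x1 - c * 0.5) < 0.02    # edge sits at x = c*t
assert abs(speed - c) < 0.1        # and moves at speed c
```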

Now, if you change this sign to plus-- remember, I said you can have it either plus or minus, and the same thing holds. If you change it to plus, when you advance t, you have to get to a negative x for the function to arrive. And so with a plus sign in here, this is a wave that moves that way. With a minus sign, it's a wave that moves that way. These are traveling waves.

Now, it pays to play around with this. It's very easy to get confused, and it pays to play around with this. A few examples just on your paper one evening, and that will straighten it out.

Now, what I'd like to do is bridge two disciplines right now. Namely, normally, lumped parameter disciplines are always taught over here, and wave disciplines are over here. In electrical engineering, you get your lumped parameters-- [UNINTELLIGIBLE] c network. In the field theory courses, you get waves. But the two are so close, and one is derivable, as we'll see later, from the other. And even the notation-- you think this notation looks so different than networks? Let's take a look at it.

Take this expression here. We could write this expression this way: p of x and t is equal to p of-- watch out now-- x and s, e to the st, where-- doesn't go up. Where p of x and s is equal to p of s, e to the plus or minus s x over c.

So this says, the pressure in this wave-- or this could be a voltage wave as well, or a magnetic field, or-- it says that it's some complex amplitude. When you specify the frequency, and you specify where you are in space, this is just a complex number times e to the st.

Now look. Let's take a look at networks. And if we had derived networks in a different way-- let's say, not derived, but if we had set up our notation for networks in a different way, what you're going to see is it's identical to what we have in the wave situation. Let's suppose we have-- let me just draw a network here. And so on.

Normally there's a source, v0 of s. And when you're dealing in the frequency domain, this might be a voltage source. This might be a v1 of s. Clearly it depends on the frequency of the source what the complex amplitude is here. This is a v2 of s. This is a v3 of s, et cetera.

Now suppose-- that's the way you did it. You could calculate the transfer function from here to here and get v1 of s, which is the complex amplitude of the voltage at this point. Multiply the complex amplitude times e to the st, and you have a solution. If you happen to want the real part of it, take the real part, and you have the time function.

So suppose, instead of using this notation, we had said that the voltages out here were functions whose complex amplitudes were functions of x and s. So the voltage out here would be specified by where you were in x: v of x and t. In other words, v1 of x and t-- of x1, sorry. v of x1 and s, e to the st would be this voltage here, v1 of t. v2 of t would be the same kind of complex amplitude, a function of x2 and s, times e to the st, and so on.

That is exactly what this is. You specify that point in x, and that, with the frequency specification, gives you the complex amplitude that multiplies e to the st. And e to the st, because you're dealing with linear systems, is always the solution. So e to the st times the complex amplitude gives you this. And we could have treated network theory exactly that way. So each node was specified by a position: x1, x2, x3. If the network were spread out around the floor, it could be an x and a y. With a three-dimensional thing, it could be x, y, z.

So really, this notation and waves are the same. They really are the same. The only difference is-- and it's only a difference because of the way we originally presented network theory-- that the complex amplitude, instead of just being a function of s, as you have seen it in network theory, is now a function of space and s. OK.

Now, this is nice. It's a scalar. We have to deal only with this complex amplitude. Suppose that instead of one-dimensional, it were multi-dimensional. Suppose I had x, y, z-- three-dimensional. The differential equations are linear. The solution is always going to be e to the st. And so in more dimensions, I would simply get p of x, y, z, and t. In other words, now pressure, instead of just being a function of position and time along the x-axis, is a function of x, y, and z and t. But you know right away it's a complex amplitude, a function of x, y, z, and f, times e to the st. End of story. So any of the solutions are going to come out e to the st times something, which, when you pin down your point in space and you pin down your point in frequency, is a complex number.

Now, worst of all worlds is a horrible thing called-- oh, no, no, no, no no. Not the worst all worlds yet. Suppose-- let's see. We're getting to it. Suppose we had a vector. You ever heard of complex vectors? No? Usually, when you first hear of them, they scare the heck out of you. First of all, the word "complex," that's pretty bad. And then "vector" is another one. Well, if you had a vector like u, for example-- u as a vector of x, y, let's say, z and t, what would happen? In a differential equation e to the st is always the solution, the time behavior, and there's a complex amplitude out in front of it. And the complex amplitude-- don't get scared now-- is a vector of x, y, z, and f.

And this isn't something from memory. You just know that it's a linear differential equation. It's got to come out this way. Now, what is this? OK. If you looked at, this u of-- it's called a complex vector because it's a complex amplitude that happens to be also a vector. So it's u of-- I'll just write it down. Oh, I think I need a little more space to write that down.

u of-- I'll do it in rectangular coordinates, as we did. u of x, y, z, and s is i sub x, the unit vector, times u sub x of x, y, z, and s, plus i sub y, the unit vector, times u sub y of x, y, z, and s, plus finally i sub z times u sub z of x, y, z, and s.

Now, in other words, this is a scalar, and it's just like the situation that we had up here with pressure. Each one of these is just a scalar-- you simply have three of them now, in three dimensions. And all of those are complex amplitudes. They are all complex numbers when you specify space and frequency. And they just happen to have a direction associated with them. So that's what a complex vector is.
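As the lecture suggests, working a couple of numbers through a complex vector takes the fright away. A minimal sketch--not from the lecture, with made-up amplitudes and frequency, and with s = jω so the real part gives the physical signal:

```python
import numpy as np

# A complex vector: three ordinary complex amplitudes, one per coordinate
# direction. Pin down space and frequency, and each component is just a
# complex number. The time function is Re{U e^{jwt}}, componentwise.

w = 2 * np.pi * 100.0                        # angular frequency (rad/s)

# Complex amplitude vector U = (Ux, Uy, Uz) at one point in space:
U = np.array([1.0 + 1.0j, 0.5j, -2.0 + 0.0j])

def u_of_t(t):
    """Real, physical vector at time t."""
    return (U * np.exp(1j * w * t)).real

# At t = 0, you get the real part of each component:
assert np.allclose(u_of_t(0.0), [1.0, 0.0, -2.0])
# One full period later, the same vector comes back:
assert np.allclose(u_of_t(1 / 100.0), u_of_t(0.0))
```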

It's good to experience for yourself once, just a couple of numbers in something like that, so it takes the fright away. But there is no fright that needs to be associated with it, because it is a very simple extension of that.

Let me see. Hang on a second. I erased what I'm supposed to do. We're done? OK, any questions? Yeah? No? Just stretching. OK.