ROUGH DRAFT 9-13-11, Outcome Measures for Centers for Independent Living – An IL NET Resource Presented by ILRU >> MIKE HENDRICKS: How are we doing, boss? Okay, we are ready with our online friends and we're going to start talking about what may be the absolute heart of this whole process. So I don't know what to say other than to say it again. This is really important. This is really, really important stuff. This is the indicators. You will see in about half an hour why this is so very important, I think. First of all, let me ask you a question. How many people have nice hotel rooms? Okay, what makes it nice? Anybody. >> Roll-in shower. >> You have a roll-in shower. >> Comfortable bed. >> MIKE HENDRICKS: You have a comfortable bed. >> Accessible room. >> MIKE HENDRICKS: You have an accessible room. Sorry, I didn't hear. >> Someone said it's clean. >> Wireless Internet. >> MIKE HENDRICKS: Wireless Internet. >> Air conditioning. >> MIKE HENDRICKS: Air conditioning. >> No bed bugs. >> MIKE HENDRICKS: No bed bugs. Where do they normally make you stay? You need a, well, you need a CIL that is not so cheap. Yeah, uh-huh. >> We have cable. >> MIKE HENDRICKS: You have cable, nice. >> We have furniture. >> MIKE HENDRICKS: Okay, if I counted correctly, seven different things made a room nice. I bet we could have 17 if we kept going. When you go to look at a room, do you look at the room as a whole, or do you look at these seven different things, or 17 different kinds of things, or whatever it is? What is it you look at? The room, or different aspects of the room? >> A general look and you identify individual things. >> MIKE HENDRICKS: If you can identify what it is, you look at those aspects. Here is what I want to say. Nobody is measuring outcomes today. Nobody ever has measured an outcome. Nobody ever will measure an outcome. That is pretty radical for a session on outcome measures and CILs, right? You do not measure outcomes. You measure --. >> Indicators.
>> MIKE HENDRICKS: Indicators of outcomes. You don't look at a hotel room. You see if it has a roll-in shower, if it's clean, if it's accessible, if it has bed bugs. You look at different aspects of the room, and those things define what you mean by a nice hotel room. Add them up, and that is a nice hotel room for you. Okay? Same thing with the outcomes. You will not be measuring your outcomes. You will be measuring your indicators. Pat, uh-huh. >> I think the bed bug issue really does bring up something that I have often wondered about. The prevention of bad things from happening to people can actually be a statement about the power and the ability of the activities that we do. But it puts you in a position of knowing what the bad things are and having some baseline, then showing how your interventions can stop bad things from happening. I know we do it at centers all the time because we talk about how many people did not go to nursing homes on account of what we did. But how does the larger world of program evaluation think about measuring the absence of bad things as opposed to the presence of good things? >> MIKE HENDRICKS: Good question, Pat. May I suggest you are kind of talking about two different concepts here. One is the measurement of it, and the second is the analysis of it. Those are two different things. The measurement, I think you probably would agree, is pretty straightforward: are there bed bugs or not, where is the person living. The measurement is not tricky. It's the analysis. We'll get to that later in the week. You bring up a good point. Sometimes we want to achieve things, and sometimes we want to prevent things. Both are desired outcomes, or could be. It's up to you. Both could be desired outcomes. Entirely your choice, completely. Yep. Okay, I have just said you will not be able to measure your outcomes at all. You have to measure your indicators. That is the next step on our yellow brick road: step four is measurable indicators.
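Mike's hotel-room analogy can be sketched in code: you never measure "nice room" directly; you check each indicator, and together the indicators define the outcome. A minimal sketch in Python; the indicator names and the sample room are invented for illustration:

```python
# Hypothetical sketch: "nice hotel room" is never measured directly.
# Each indicator is checked, and the indicators define the outcome.

def is_nice_room(room):
    """True only if every indicator in OUR definition of 'nice' is met."""
    indicators = [
        room.get("roll_in_shower", False),
        room.get("clean", False),
        not room.get("bed_bugs", True),  # preventing a bad thing counts too
    ]
    return all(indicators)

print(is_nice_room({"roll_in_shower": True, "clean": True, "bed_bugs": False}))  # True
```

Someone with a different list of indicators would get a different answer for the same room, which is exactly the point Mike makes next: the indicators define what you mean by the outcome.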
What the heck is a measurable indicator? I think you know this in your heart, in your gut, intuitively. Let's see if we can't be a little more specific about it here. It's the specific item of information that tracks or indicates a program's success on an outcome. It defines, oh, this is something I would underline if I were you. I would underline that word defines right there. I think if you can get it in your head that an indicator defines what you mean by the outcome, you're a long way down the road. That is a really good way to think about it. The indicator defines what you mean by the outcome. Whether it has a roll-in shower defines for our friend here whether that is a good room. Whether it's clean defines whether it's a good room. Whether it doesn't have bed bugs defines whether it's a good room. That is an awfully good way to think about it. The indicators define what we mean by the outcomes. In fact, I would go as far as to say that outcomes are really warm fuzzies. That is the way I describe outcomes. They are warm fuzzies. We could almost sing kumbaya as we read some of them off. They are nice. Don't get me wrong, I'm not making fun. There's not enough meat yet. They are just concepts, warm fuzzy concepts. It's the indicators that start to put the meat on the bones. If I define a nice hotel room as having a king-sized bed, on the first floor, and, I don't know, not next to an elevator, and you defined a nice hotel room as on the 15th floor with two double beds and near the maid station, hey, we have defined it completely differently. We're going to come up with completely different answers on whether our room is nice or not. That is a really important concept to have in your head. The indicators define what we mean by the outcomes. An indicator also shows how much the outcome is being achieved. It's often expressed as the number and percent of participants achieving it. You will see that. We'll get into a bunch of examples here. Let's get this down. Outcome versus indicator.
Outcome versus indicator. Here is an outcome. An outcome is a benefit, we talked earlier, benefits for participants during or after involvement with the program. An example outcome might be parents read to their preschoolers. Good outcome, right? Target, parents; present tense verb, read to preschoolers. Behavior change, one of those seven magic words. A fine outcome. We can't measure it yet. How about an indicator? Let's think of an indicator: the specific information collected to track the program's success on that. For example, maybe what we actually measure is the number and percent of parents who read to their preschoolers at least three times per week. Let me back up. If that is all you had and someone told you, go out and find what percentage of our parents are doing that, would you be able to go and get that percentage? If that is all you had right there. >> Quick question if I may. If you go back to the previous screen, the fourth bullet, isn't that an output based on your earlier definition? >> Isn't that a? >> An output? Based on your earlier definition. >> MIKE HENDRICKS: No. This is participants achieving the outcome. Participants achieving the outcome. This is not how hard the program is working. Remember, outputs are how hard the program is working. An output might be, you know, the number of parents we're working with or the number of hours we're spending with parents or something like that. No, this is definitely the number and percent of participants achieving the outcome, definitely. Let me go here again. If I said to you, I'll give you a dollar, big spender, a dollar to go out and count how many of our hundred parents are reading to their preschoolers, could you earn that dollar or not? >> Yes. >> MIKE HENDRICKS: How would you do it? Someone said yes. How would you do it? I don't think you can do it. You don't know what to measure.
What if I said I'll give you a dollar if you will go out and count the number and percent of our hundred parents who read to their preschoolers at least three times per week. Could you do that? That you could do. That is an indicator. Sir. >> I think you could go out and get your dollar for the first one, but the results wouldn't be very meaningful. The more specificity in what you are asking for, the greater meaning you have, because you have defined the parameters of the meaning that you're seeking. >> MIKE HENDRICKS: I like the way you're going. Can I ask you to follow it up? Why wouldn't this be specific enough? >> Because everybody would answer yes. >> MIKE HENDRICKS: They might well answer yes, yes. >> Everybody reads something, I mean honestly, you know. >> I hear you, that is good. Even if they read once a month, they could legitimately say yes. But we're not defining that as a success on this outcome. We're not saying that once a month does the work. It has to be at least three times per week before we count it as achieving that outcome. This is the way we're defining that outcome. Much more specifically. Let's look at another example. This will start to hopefully make more sense. Here is that program we saw earlier, okay, the logic model we saw earlier where we had the meeting with the at-risk teens and the six different outcomes going up here. Let's look at some indicators that we might use to measure these outcomes. At-risk teens complete homework regularly. Can't measure that. I don't have a measure for that. But if you ask me, go measure the number and percent of teens who finish their homework at least 3 days out of the week, that I can measure. At-risk teens earn better grades, pretty straightforward. Number and percent of teens who earn better grades in the semester after the intervention than before. Pretty straightforward. A very obvious indicator. This one, at-risk teens achieve passing grades. What is a passing grade?
In this case we defined it as a C or better. Absolutely, overall. How about attending school regularly? I would say you can't measure that because you don't know how to define it. If you said the number and percent of teens who attend school at least 80 percent of the time, now you can go measure that. Meeting district attendance requirements, pretty straightforward. They simply avoid attendance problems. Then graduate from high school. Number and percent of teens who receive a diploma on time. That on time is important. You know, it doesn't say receive a diploma eventually. It says on time. That is very specific. So you have to count a very certain way if you're going to be measuring whether they receive a diploma on time. That helps you know what to measure. Go ahead. >> This might be much later, but when we're presenting outcomes, aren't we really presenting indicators? >> MIKE HENDRICKS: Oh, I love you. Absolutely. Absolutely. >> That is really what we want to present, the indicators. >> MIKE HENDRICKS: Let me turn the question back to you. What do you have to present? >> The indicators. >> MIKE HENDRICKS: Duh, there you go. That is what you will have the information on. You will have information on the indicators. Which is why, remember I said this is the heart of the process? What if you have chosen bad indicators for your outcomes? Are you okay or are you screwed? You are in trouble. The choice of indicators is very, very important. It's why I said the outcomes are the warm fuzzy skeletons with no meat on them. It's the indicators that actually count. It's the indicators that you measure. The indicators are what you report. The indicators define what you mean by that outcome. So absolutely. That is a very good question, thank you. Let's look a little bit more. We need to understand this in our bones, speaking of skeletons. Okay, let's look at some example outcomes and indicators. Smoking cessation class. Participants stop smoking.
What does that mean? These people have defined it as people who smoke zero cigarettes in the week following the end of the course. How else could they have defined it? What other indicator might they have used? Make up one. Yeah. >> You could make it anything really. You could make it somebody who stops smoking for 3 weeks after the end of the course. >> MIKE HENDRICKS: Why not. Why is 1 week better than 3 weeks? >> Somebody can maybe last a week. >> Sounds like a reformed smoker. >> Beyond that, it loses its potency, and all the behavioral things that have to change kick in against that desire. So your chances of lasting past a week are slim. So that is a great time to measure, to have your indicator or measure your outcomes, so that you show a high rate of success with your indicators. >> MIKE HENDRICKS: You're saying perhaps that indicator is too easy and is perhaps going to make the program look more effective than it is, compared to if you had said following 3 weeks after the course. Wow, look what you just said. You just said I can manipulate my findings based on what indicators I pick. Yes, my friend, you absolutely can. You absolutely can. Which is why the choice is so important. >> So indicators need to be tied to making progress relative to the status quo of the community. >> MIKE HENDRICKS: Indicators need to be --. >> More effective in demonstrating your progress. >> MIKE HENDRICKS: Now remember, I like to think of indicators as Christmas ornaments on a tree. Wherever you hang one, you hang it once. An indicator is always tied to an outcome. The key is to have that indicator capture exactly what you mean by that outcome. That is the trick. That is also the difficulty. What do I mean exactly by this outcome? >> I have a question. >> MIKE HENDRICKS: Please. >> About whether or not 1 week versus 3 weeks depends sometimes on where you are and why you have the smoking cessation class.
If I run, let's say, a homeless shelter and I just want them to get through the week, they are going some place else after that and they can't leave unless they have stopped smoking for a week. I don't know. That really is okay, the 1 week. Because I didn't do it to manipulate my numbers. I did it because truly that is all I care about. >> MIKE HENDRICKS: Yeah. >> Also, taking it out of the smoking thing, if you are in an environment with a transient population, you may be able to keep up with them for a week. But after a week, they have disappeared off the face of the earth. So if you promise you're going to follow them for 3 weeks and that is not legitimate, you're setting yourself up to just not be successful, because you have promised more than you have the control to deliver. >> MIKE HENDRICKS: That is an interesting, very important point that we're going to get to in just a second. One of the important traits of an indicator is that it needs to be measurable. We will talk about that in a second. Indicators are what we want to measure. I saw some other hands earlier. Sir. >> I want to ask kind of a follow-up to what you are talking about here, along the lines of your initial interview, long-term goals. >> MIKE HENDRICKS: Outcomes, please. >> Sorry, my bad. Outcomes. >> MIKE HENDRICKS: No, no. >> And also your if-thens, using the smoking cessation one. Can't we say our number and percentage who stop smoking for a week, then say, if-then, the number of persons who continue to not smoke after 1 month, and if-then, 6 months. We may want to say short term this program is very successful, but long term it starts to have less effectiveness and possibly needs to change to achieve a long-term outcome. We can have multiple indicators off a single outcome, can't we? >> MIKE HENDRICKS: It's much, much better to think from your logic model. What is your logic model? Is it participants initially stop smoking?
And that leads to participants, just making this up obviously, participants see the value of not smoking. Long term, participants stop forever, something like that. If that is the case, you have two places where you're seeing if they're still smoking or not. Down here and up here. But you still have different indicators for different outcomes, because you need to have an indicator for each outcome. You see what I mean? A much better way to handle it, may I suggest, is to think through the logic of what you want to happen, lay out the different desired outcomes and different steps, and put an indicator on the different outcomes, yeah, much better. That way it makes you think about what you want to happen in the world. Okay? Let's try another one. Oh, right here. English as a second language instruction. Okay, participants become proficient in English. I would have probably worded it participants are proficient in English. Great desired outcome, but I have no idea how to measure it. If you said to me the number of participants who score at least a certain number on the TOEFL test, a test of English language ability, given on the last day of the course, that I can measure. That I can measure. Okay, tutorial program for sixth grade students. Students' academic performance improves. I can't measure that. Not sure what you mean by improves. But if you said the number and percent of participants who earn better grades in the grading period immediately after the program ended than in the grading period immediately before the program began, that I can go out and measure. Employee assistance program. People recovering from drug and/or alcohol problems retain their jobs. Nice outcome. No idea how to measure it. But I do know how to go measure the number and percent of participants employed at the same company 6 months after beginning our program. That I can go get. One more, pregnant women follow the advice of a nutritionist. I have no idea how to measure that. Okay, I do not know what that means.
But the number and percent of participants who take recommended vitamin supplements and calcium at least five of 7 days per week during their entire pregnancy, that I know what it means. That is a definition of the outcome that I can deal with. Sorry, one more. Volunteers create clean, drug-free play areas. Number and percent of vacant lots that are free of litter, have grass or other appropriate ground cover, have play equipment, and are free of drug sales and/or use. That I can get my hands around. So it's the issue, yeah, please. >> Last, sorry. On the last one, is it going to be confusing if you have some that may have grass, some have play equipment? >> MIKE HENDRICKS: Smart man. >> Mixing and matching. So what constitutes a success if, let's say, all have grass but only one is free of drug sales? I'm saying how do you determine. >> MIKE HENDRICKS: Obviously a very astute guy. You tell me. Read the wording of it and tell me what would happen in that case if some of these vacant lots fit some of the criteria but not some of the others. What do you think it would be? >> I don't know. >> MIKE HENDRICKS: I'm focusing on that word right there. >> No, it constitutes all of that, correct? >> MIKE HENDRICKS: Sounds to me, the way this is worded, it has to be all one, two, three, four parts of this, doesn't it? A four-part indicator. Sounds like they have to meet all four parts before you give this one a yes. Reads that way, doesn't it? >> Yes. >> MIKE HENDRICKS: Maybe that is not good, but I think that is the way it works. >> Yeah. >> MIKE HENDRICKS: Yeah. Okay. Again, I have a couple puzzled looks on faces I'm not entirely happy with. Are we not getting this? Am I doing a bad job of getting this across? Help me out. We're here to learn this. >> BOB MICHAELS: Mike. >> I'm looking at the two-part outcome. Per your comments on the logic model previously, shouldn't this outcome be better phrased by having it as a two step?
First you have a clean area, second it be drug free? Or vice versa? >> MIKE HENDRICKS: Another smart guy. This is where they talk about unidimensional outcomes. You really want to have an outcome focused on one thing. This really has two dimensions, doesn't it? It has clean and drug free. This is not the best outcome, and you're smart to see it. Yep, yep. Uh-huh. >> Is it possible, for good behavior or bad behavior, to have one outcome and several indicators on it? >> MIKE HENDRICKS: Another good question. Sometimes you have to. It's not always possible to have one indicator fully define what you mean by an outcome. Let's just make up one. Children are happy. Okay? I'm not sure there's one indicator that is going to capture children are happy. That might take, you know, kind of multi-dimensional kinds of measurements in there. That is quite fine, not a problem. You will see, in fact, Bob will be showing us in a bit the exact indicators we used for those eight outcomes. Remember, Bob showed us we winnowed it down from sixteen to the eight we're measuring. Some of the eight only required one indicator and some required two, exactly for that good reason you just mentioned. Yep. Okay. Let's move on then. >> BOB MICHAELS: Mike. >> MIKE HENDRICKS: Sir. >> BOB MICHAELS: Usually this is where we get a question about why you do number and percent. >> MIKE HENDRICKS: Ah. Why do we do number and percent? Because we have to count. Let's take an example. If we have 50 hotel rooms and we want to know how many are quote/unquote good, we would like to know how many, like is it 30 of the 50 or 40 of the 50 or whatever. Sometimes, if we're only measuring five of something, you know, three of them can be 60 percent, but it's only three of five, so that is kind of misleading. That is why we do both numbers and percentages, because sometimes the percentages can be misleading if the numbers are really small.
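Mike's explanation of reporting both the number and the percent can be sketched as a small calculation, using the room counts from his own example:

```python
def number_and_percent(achieved, total):
    """Report both the raw count and the percentage of successes."""
    percent = 100.0 * achieved / total if total else 0.0
    return achieved, percent

# 3 of 5 rooms and 30 of 50 rooms are both 60 percent, but the raw
# numbers tell very different stories, so we always report both.
print(number_and_percent(3, 5))    # (3, 60.0)
print(number_and_percent(30, 50))  # (30, 60.0)
```

The same percentage over a tiny denominator carries much less weight, which is exactly why both figures are reported together.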
So it's traditionally good to do both the numbers and the percent. That lets us know how much success we have had on that outcome or on that particular indicator. Does that help or not? >> BOB MICHAELS: Yes, exactly that. >> MIKE HENDRICKS: You're going to do this in a second, and Bob is going to tell you what the task force did too. Somebody mentioned earlier the measurability. Can it be measured? You have probably seen this. How many have seen this? Good indicators are SMART. Have you seen this before in other training programs? >> Objectives. >> MIKE HENDRICKS: Good indicators are SMART: specific, measurable, achievable, relevant, timely. You will have this to take back with you when you get home. This page will be useful when you're back at your CIL and writing your own indicators. Let's be sure we know what we mean by SMART. Specific: is it clear exactly what is being measured? Often they say, would a totally unfamiliar person reading this be able to understand exactly what I'm supposed to go measure? That is a good test. Second one, measurable: can the necessary information be gathered with an acceptable amount of effort? I think that was the question earlier, right? Would you have to knock yourself out to get this, or could you actually get it? It does need to be measurable. Achievable: is the indicator somewhere between too easy to achieve and hopelessly out of reach? For instance, world peace would not necessarily be a good indicator. Probably not going to happen. On the other hand, I don't know, you can think of something really easy probably. So it needs to be achievable. Relevant: does the indicator capture the core essence of the desired outcome? If I were running a weight loss clinic, what would be one of the most relevant indicators I could have? >> Weight loss. >> MIKE HENDRICKS: How much people weigh, yeah. That would absolutely be relevant. Timely.
Is the indicator likely to move enough during the designated time period to provide useful information? Does it make sense, if you're working with an after school program, to measure the quality of the kids' grandchildren? Probably not. You're not going to be around when they have grandchildren, so that is not going to work. These are the five things that mean SMART. Like I say, this will be useful when you go back home. Now, the task force, Bob is going to tell you, was facing this exact same problem. We have these outcomes, how the heck do we measure them? 10 minutes until the break. We can start now or as you like, Bob. >> BOB MICHAELS: Why don't we take a break now. >> MIKE HENDRICKS: Bob wants to take the break now, is that okay? >> BOB MICHAELS: Let's take the 25-minute break. >> MIKE HENDRICKS: Okay, 25-minute break until 3:15 and then we'll start back. >> BOB MICHAELS: We'll be starting again in about 2 minutes. Okay, I want to get started. Appreciate everybody coming back so quickly. This is great. And the people watching from a distance. When I think about indicators, I always think about standards and indicators and what they are supposed to be and what they are supposed to reflect. How many of you deal with standards and indicators every day? Probably only about a third of the hands have gone up here. This really is one of the areas that we as centers for independent living should be aware of. We have standards that we're supposed to meet, and indicators that, as we said, show how we implement those standards. When we first started talking about indicators, the best way for me was to look back at indicators and think of them in terms of centers, and understand what indicators meant by how they related to the standards that we have out there. For instance, the first standard is related to philosophy. It says that in order to practice the philosophy, you have to be concerned with control. That is all it says related to that.
But the indicator says, if you're concerned with control, that means that, you know, a majority of your directors are people with disabilities. You know, so from that we started defining what it is you need to be and to have an understanding. We had arguments then about how do you tell that somebody has a disability? Do you need a doctor's exam, or do you need a D after the name? You know, what do you need? So we went through that process of trying to help them define their indicator further. Understanding how that indicator related to the standard was really helpful for me in understanding the indicators that we have here that help implement the desired outcomes that we're talking about. Having said that, what we did, we went out and tried to develop indicators for this. You're going to find, as we discuss this, there are a couple definitions that we have to deal with on a regular basis. You will see them in some of the written information. For our purposes, a consumer is a person who has a consumer service record, a CSR. An I&R caller is a person who does not have a CSR who contacts us for information. Then a client is either a consumer or an I&R caller. We found it was necessary to put those definitions in early on. Anyhow, we went ahead and did that. We went through a process of trying to define the indicators and come up with good indicators. We tried, as we did throughout this process, to go out and find out from the community, from the independent living community, what they wanted in each case. This is the toughest part of the whole process. Coming up with these indicators is not something you sit around and do because it's a lot of fun. You do it, and we tried to do this with some motivation: Mike offered $100 each to the people who came up with the three best indicators for the outcomes we identified.
We did that, we had a competition, and it really helped us get to know what people were thinking about and what they valued. People could put thought into it. We sat down at one point and decided whose were best and awarded the $100 gifts to people. Anyhow, we went through this process. These are the indicators that we came up with. Now, if you look in your book, in your group of slides in color, the next three slides in there will look kind of like this. They will indicate which of the outcomes we're using, which ones came with indicators, and what those indicators were. Now, I'm not going to try to read to you every indicator that we came up with. There were 12 at the time, 11 now. There were a lot to read and a lot to understand. You do have those indicators in your packet, again, on a sheet that just lists the 12 indicators. I left mine back at my desk here. >> MIKE HENDRICKS: I'll hold it up. It says indicators measured during the 2010-2011 NCIL outcome measures field test. One piece of paper, and it's in your packet. >> BOB MICHAELS: That will give you, if you want to look at it, all of the indicators we came up with this year. There will be 11 in there for the eight outcomes. Okay. Let me read some of this to you. You may not understand it all. You see we wrote an indicator for people with disabilities have the skills, knowledge and resources to support their choices. We wrote an indicator for people with disabilities are more independent. Let me show you an example of one of those. For instance, on the skills, knowledge and resources, we said the indicator for that would be the number and percent of consumers served by the CIL within the last 9 months of the past federal fiscal year who can list at least one specific skill, type of knowledge, or resource they have now that they didn't have before approaching the CIL. Do you want me to read that again? That gives you an idea of what we did. You see that we didn't just say did you learn something.
That wasn't enough. What we required people to do was then identify something they had learned. Okay. On the I&R stream, the two that we had were people get the information they need and people with disabilities advocate for increased community supports. One of the indicators for people get the information they need is the number and percent of people contacting the CIL who used a new resource they learned about from the CIL's I&R efforts. Okay? Third, systems advocacy: in that stream, of the four that we had there, there's problems identified, customer identity exists, and decision makers act on our agenda. Two for methods and practices promote independence. I'm going to read you the two from the top. I think they are really important. You will see how they are connected to the current 704 report. The methods and practices promote independence. The first indicator was the number and percent of consumers served by the center within the past calendar year who moved out of an institution and into a self-directed community-based setting. What do you think we had to do here? >> Define. >> BOB MICHAELS: We had to define. What is a community-based setting? What is an institution? Okay. Second one, the number and percent of consumers served by the CIL within the past calendar year who remained in a self-directed community-based setting on December 31st despite having been at risk of moving into an institution. Okay? These were hard. These were really, really hard to come up with. Just defining at risk was tough. We looked and looked and looked and tried to get a definition. We looked at the feds, the states, and just could not find anything at all that worked for us. So what we finally did is we went to Steve Gold, who is the guy who has the lawsuits on institutionalization. We asked him if he had one, and he said the closest thing I have is the definition that we used in a lawsuit in Louisiana. He gave it to us.
You will see, and we will show this to you, it's a really long definition of what at risk is. So it was hard and it was complex. And you have a sheet in front of you that shows all of the 11 indicators for the eight desired outcomes that were identified. Any questions? Okay, it's your turn. >> MIKE HENDRICKS: Can I add a couple things? Is that okay? >> BOB MICHAELS: Sure. >> MIKE HENDRICKS: A couple things from my nerdy research perspective. Bob said a couple important things. Since you're not nerds, they might have slid by you. Let me mention a couple. First of all, in your packet you will find this. It's called Measuring CIL Outcomes. This is important. This is the training manual we used this most recent field test year. We said earlier we wanted to be sure you had all the resources we had. This is the training manual we sent out to the CILs that participated, explaining a lot of stuff, including explaining what Bob just said about how the heck we are defining an institution, how we are defining a self-directed community-based setting, and how we are defining at risk, which was a --. >> BOB MICHAELS: A bear. >> MIKE HENDRICKS: And still remains a bear, by the way. The field has not figured out how to define at risk. I'll say that as an outsider. You guys have a dilemma. You talk about keeping people at risk out of institutions, but yet you do not know how to define if a person is at risk. That is a problem, may I suggest, that your field needs to work on. There is a lot of good stuff in this training manual. Another thing from my research perspective: on this list of the 11 indicators that Bob showed you, you will notice, as we talked earlier, some of the outcomes only require one indicator. Some of the outcomes require more than one. Why? It was our, the task force's, judgment that that was necessary. >> BOB MICHAELS: Yeah. >> MIKE HENDRICKS: Yeah.
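The first indicator Bob read, the number and percent of consumers who can list at least one specific skill, type of knowledge, or resource, could be computed along these lines. The records and field names below are invented for illustration and are not the field test's actual data format:

```python
# Hypothetical consumer service records; the field names are made up.
consumers = [
    {"csr_id": 1, "items_listed": ["budgeting skills"]},
    {"csr_id": 2, "items_listed": []},
    {"csr_id": 3, "items_listed": ["housing resource", "ADA knowledge"]},
    {"csr_id": 4, "items_listed": ["peer support group"]},
]

# A consumer counts as achieving the outcome only if they can LIST
# at least one specific item, per the task force's stricter definition.
achieved = sum(1 for c in consumers if len(c["items_listed"]) >= 1)
total = len(consumers)
print(achieved, total, round(100.0 * achieved / total, 1))  # 3 4 75.0
```

Note that the indicator's wording, can list at least one item, translates directly into the counting rule; simply answering yes would not be enough.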
Another thing, if I can just point out something you may not have noticed: for the first indicator, it's the last nine months of the past federal fiscal year. And for others of the indicators, it's the past calendar year. That is a different time period. Why did we do that? It made sense. It made sense for what we were trying to accomplish at the time. That is the kind of thinking you're going to have to do: what is the time period that makes sense. Another one, if I can point it out, as Bob said, it was a very, very important thing he said and I want to reinforce it. Look at the first one. Who can list at least one specific skill, type of knowledge, or resource. This was something the task force wrestled with from the very first meeting. The question was how hard are we going to be on ourselves. That is kind of what it was. Are we going to try to rig it so we look really good, or are we going to really try to put the bar a little bit higher so we know we're actually succeeding? Look at the first indicator. We could have said the number of consumers served by the CIL, Bob said this, I'm reinforcing it, who say they now have new knowledge and resources. The task force, bless its heart, thought that was too easy, that it's not credible, maybe not accurate. By gosh, we've got to be tougher. If I had a hat on, I'd take it off to Bob and the task force because they made a very integrity-filled decision to say no, not good enough. We're going to hold our feet a little closer to the fire, and only if a person can list something do we count it as a yes. And I take my hat off to Bob. He has been filled with integrity through this whole process. I never got the sense he wanted to rig this to look good. I always got the sense he wanted it to be an accurate reflection of whether we are achieving what centers want to be achieving. I thought there was one more nerdy thing but I can't find what it was. Anyway. >> BOB MICHAELS: All right. So now it's their turn. >> MIKE HENDRICKS: Yes, sorry.
Maybe that was it. >> BOB MICHAELS: Yeah. Question here. >> I have a question. I understand that we are doing this so that we can be better at what we do, and reporting and trying to get funds from funding sources is an additional benefit. But does it not sometimes matter how the funder defines some of these? So say I go to a funder, we can use United Way or some of the other foundations that we work with, and I say our definition of at risk is this. Then they come back and say, well, that is not our definition, and therefore we're not going to fund you, when in reality, shoot, I'll use your definition. That is easy. We kind of run the risk of boxing ourselves in, don't we, if we define it and stick to it that way. >> BOB MICHAELS: Well, I think what you're going to find is you'll define it and nobody else will. You know, but again, if you discover that there's a better way to do it, you know, you don't have to take our recommendation. Do your own. >> The biggest issue we have now is that so many people are talking about education as being a key community need. So we said, well, we provide education. They go, well, but not our definition of education. So it gets really kind of, we need to have a dialogue with them to say this isn't the only definition of education. >> BOB MICHAELS: It's like counseling, who provides counseling. Does peer counseling provide counseling? I think so. Yeah. You have to decide what works for you. Yeah. >> I have a question for you. Have you thought about naming both the outcomes and the indicators as qualitative indicators, as opposed to quantitative, progressive indicators? In other words, for example, the second outcome you talk about, people with disabilities are more independent. Typically that phrase would be, more than what? Was there a change from year two to three to four?
These all seem to be a glimpse: in this nine-month period, this is what the number and percentage was. But it's not speaking to progression, to movement in a certain direction compared to that. Have you had conversations about that? >> BOB MICHAELS: Yes. See, one of the things you're going to have to do is decide, for your community, for your center, what is success. You know, if success is 20 percent of the consumers, then fine, yeah. You would hope as time goes by it would be 25, 30, that you're raising that up. Our idea was never to have everybody do it and then you would look at the center in Wisconsin and say, well, they must be a better center than the ones in Idaho because they have 28 percent that do it. You know, we didn't want this to ever be a comparison of one center against another. It's you against you. You're looking at how you did and trying to determine whether or not you do better over time. You determine what your own standard is. Does that answer your question? Anything else? >> My question is, here in Oregon, the CILs have come up with outcome measures they want to report as CILs here in Oregon, as an aggregate or single number. Given what you said about CILs coming up with measurement for their own area, what they might do, this is a question, it sounds like you are suggesting there might be outcome measures for the state, then individual outcome measures for a CIL? Because we don't want CILs to be compared to one another. And the areas in the state are very distinct and different. >> BOB MICHAELS: Yeah, we have tried throughout this process to protect centers so that we would never get into a situation where you compare one against the other. If the centers in a state allow themselves to have that happen to them, there's not much we can do about that, you know. The idea here was never so that you would have a way to compare one center against another. >> Our idea was not --. >> BOB MICHAELS: The mike isn't working.
>> Our idea was not to have one center compared to another either. Our state director said, you know, outcome measures are what support funding. So if the CILs can give me outcome measures to take to the politicians to show that our services are effective, that would be really nice. So that was our intended goal. >> BOB MICHAELS: Yeah. Do you have something to say, Mike? >> MIKE HENDRICKS: As you can imagine, I wish the state director had worded that a little differently and had said, we all want to be always improving our effectiveness, so let's all be measuring the things that matter to all of us. But I know the political reality, so I understand that. I think I agree with Bob: if for some reason in Oregon the CILs feel that it would be useful to agree on a common set of outcomes that they are all going to measure in the same way, then who are we to say no. Entirely up to you. I think the problem you will have is the analysis and interpretation. You said yourself that every CIL works in a different kind of an environment. So what is it going to mean? Are you going to pool them all and take an overall average? In which case, what do they say? When you mix bananas and apples and oranges, you now have fruit salad, but what have you got? What you will have if you take an average may not mean anything to anybody. If you keep them separate, you will inevitably go down the path of one CIL looking better than another, and politicians are not so good at what I call the degree of difficulty. If you could factor in the degree of difficulty, like in diving, it would be one thing. Politicians are not so good at that. They will see a list of CILs top to bottom, and there will be ramifications to that. That's the issue, and maybe you have a solution to this. >> I think for politicians particularly, you could do the aggregate number. Statewide, this is what happens. Then you don't have to get into comparing. If you are trying to show the bang for the buck.
I think it's a good exercise to go through when you have really good factual analysis, when everybody is reporting on the same thing in the same way. So you don't end up with fruit salad. That is one of the exercises they are going through to get these results, and I think it's beneficial. >> MIKE HENDRICKS: I'm not saying that is not a good way to go. I agree. My concern is, say there are four CILs, or five, I don't know how many there are in Oregon. I should, I live here. Say there are four CILs performing really, really well and one CIL, through no fault of its own, just because of the environment it's in or the degree of difficulty, is not. Then you're going to have an average that is unfairly pulled down and doesn't represent anybody. I don't think the problem is in the measuring. I think it's in the analysis and interpretation. >> I'm wondering if there wouldn't be some benefit in some numbers, like, the CILs in Oregon were able to place or to help 263 people, I'm just picking numbers, 263 people move out of nursing homes, or were able to help 126 people gain employment. In other words, aren't there some broad numbers that become very useful, or provided information to 2,776 people? Those numbers are useful. >> Yes. >> Back here. >> A question: in the beginning you talked about how we got a bad rating, CILs did. How are we going to incorporate this to change that? >> MIKE HENDRICKS: Can I address that? >> BOB MICHAELS: Go ahead. >> MIKE HENDRICKS: This is really a research question. Let me jump in. >> What she is referring to is the report done by RSA that said CILs are not doing a good job of reporting outcomes. >> MIKE HENDRICKS: Right, trust me, it's really a research question. The study that OMB did used the PART, the Program Assessment Rating Tool. This is important for you to know. It did not come back and say you're doing a bad job. It came back and said results not demonstrated. >> Exactly.
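Mike's concern about a pooled statewide average can be sketched with a few numbers. Everything below is invented for illustration (the CIL labels, counts, and rates are hypothetical, not from any real state):

```python
# Hypothetical per-CIL indicator data: (consumers achieving the outcome,
# consumers served). All names and numbers are invented for illustration.
cils = {
    "CIL A": (45, 150),
    "CIL B": (50, 160),
    "CIL C": (42, 140),
    "CIL D": (48, 155),
    "CIL E": (10, 150),  # harder service area: much lower rate, no fault of its own
}

# Per-CIL rates: each center measured against itself, as Bob recommends.
for name, (achieved, served) in cils.items():
    print(f"{name}: {achieved / served:.0%}")

# Pooled statewide rate: one aggregate number for politicians, but the
# single low-rate CIL pulls it below what four of the five actually achieve.
total_achieved = sum(a for a, _ in cils.values())
total_served = sum(s for _, s in cils.values())
print(f"Statewide: {total_achieved / total_served:.0%}")
```

Here four centers run at roughly 30 percent while the pooled figure lands near 26 percent, representing nobody accurately. That is the fruit-salad problem: the aggregate is easy to report but hard to interpret.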
>> MIKE HENDRICKS: If you read what it specifically said, it was: we don't know if you're doing a good job or not, because you don't have outcomes identified. You don't have indicators identified. You're not measuring them. So you may be doing a fabulous job, or you may be doing a terrible job. We just don't know. So the process that Bob has been leading will solve that exactly. >> So you're working with RSA to identify those, or what? >> BOB MICHAELS: We have. We're going to leave that up to you. We have told RSA right from the beginning, we want to have these supplant, not supplement, supplant the current stuff that is in the 704. >> (Applause). >> They are the ones providing the reporting tool. It's kind of ironic when they are providing the reporting tool and saying we are not demonstrating results. You know what I'm saying. >> BOB MICHAELS: Yeah. I see what you're saying. Yeah, hopefully we will stop this. >> Have they been receptive to your pleas for inclusion? >> BOB MICHAELS: They were initially. There was a period of time when they weren't. The current director appears to be in a good place. She is going to be our guest speaker tomorrow at lunchtime. >> Let's make sure everybody asks her about that. >> BOB MICHAELS: Yeah. But before we ever do, before we say that we want you to look at this one or that, we're going to come back to centers and say, okay, what do you want to do? Do you want to be evaluating? If you don't want to, we understand. But the idea was to identify the ones that work and the ones that don't, the ones that you want, and eventually take that and put it into a 704 report. >> Great, we appreciate that. >> BOB MICHAELS: It's a long way from happening. >> Appreciate your work on that. >> BOB MICHAELS: Yeah. Anything else? >> Bob, I agree with you. The centers in Indiana, we did that after we had some bad problems. We got together with an outside contractor and developed a report for the general assembly and our state DSU.
It was a combined report, not specifically one center or one program or anything. It showed we saved the state $17 million on a two-and-a-half-million investment. So there are ways to do that. You don't have to be cheap to do it. >> BOB MICHAELS: Hopefully we're giving you some ways to do that. How you want to use it is to be decided. It should help you along there. I think the suggestion over there, you know, we have 1,700 people in homes this year, or we took this many from nursing homes, that is valuable information. Mike. >> MIKE HENDRICKS: Speaking of work, let's do some. Remember the outcomes management worksheet? Pull it out, please. It's the long one, the one where you wrote down four desired outcomes. Then, because we didn't give you enough money, you scratched lines through two, so you should have two left. Well, as you will notice, next to whichever two you have left, there's a space for two measurable indicators, isn't there? Presumably that space is blank for the moment. Your job is to fill that space up. So for whichever two of these you have left, I want you to come up with two measurable indicators for each outcome. That will be a total of four, a total of four measurable indicators. Make them SMART. I'll go back to that. Make them SMART. This is what you're actually going to be measuring. You're not going to be measuring your outcomes. You're going to be measuring these indicators. So this is, well, have fun. This is where it gets interesting. Come up with four measurable indicators for us. (Exercise). >> MIKE HENDRICKS: Maybe five more minutes. Some people are making better progress than others. Five more minutes. Okay. All right. Let's talk about your indicators. What do you think? What do you think of the process of developing indicators? >> Good lord. >> Someone said hard. Someone said easy. We should let the easy tables go first. Who wants to give us an outcome and an indicator? One of those tables that said it was easy, perhaps. >> Hmm.
>> MIKE HENDRICKS: Okay. >> I'll read the outcome. >> MIKE HENDRICKS: The rest of us, what attitude should the rest of us take? We should take an attitude of: this is really serious. It is. We should take the attitude of praising each other for tackling this tough job, because indicators are not always easy. But I'd suggest we should also take the attitude of constructive suggestions, you know, if there's something you think we might add. We are here to learn. This is a workshop. So with that attitude in mind. >> Our first indicator --. >> Read the outcome, please. >> I was looking at the outcome. Teens demonstrate increased language skills. >> MIKE HENDRICKS: Okay, teens demonstrate increased language skills. Okay. And since we're not sure exactly what that means, we need to know how you define that, what exactly you're going to measure to get at that. It's going to be --. >> Two indicators. >> MIKE HENDRICKS: One at a time, please. >> First is the number and percentage of students who demonstrate a one-letter-grade improvement from pretest to post-test. So the test needs to be established earlier. >> MIKE HENDRICKS: They are going to have a test and give it to them. >> Before the program. >> And give it to them after the program. What are you looking for? >> One letter grade improvement. >> MIKE HENDRICKS: One letter grade improvement. What do you think? You're the experts on SMART indicators. What reaction? >> What if you already have. >> MIKE HENDRICKS: Let's use the mike. >> What if you already had an A? >> MIKE HENDRICKS: What if you already had an A, she says. >> A plus. >> That gets to the difficult definition of at risk; they wouldn't be in the program. >> MIKE HENDRICKS: They are thinking they wouldn't be in the program. Okay. What else do we think? Somebody, anybody? Do you know, is it specific? Do you know exactly, oh, sir. >> There would be a time period involved. >> MIKE HENDRICKS: Say again, please.
>> There should probably be a time period involved. >> MIKE HENDRICKS: For example? >> A program year or fiscal year or school year. >> Okay. >> Some measure. >> MIKE HENDRICKS: You think you can add some kind of time period to make it more specific, perhaps? >> I think it would depend on the length of the program. If it was an eight-week program, you might repeat it. For me, I would add that to it. >> MIKE HENDRICKS: How do you have it worded now? Is it right at the end of the program? Is that when you give them the test? >> Pretest and post-test. It would show that change. >> MIKE HENDRICKS: An immediate post-test. >> Yes. >> MIKE HENDRICKS: Just to be clear about that. Okay. Other comments about it? Is it specific? Do you know exactly what it is they are being measured on? Is it measurable? Is it achievable? Hard to know without knowing how kids learn and stuff like that, but it seems it might be. Is it relevant? In other words, does that indicator capture the core essence of the desired outcome? Seems like it. Timely? Do we think the indicator might move within that designated time period? Yeah, I think we're probably thinking you did a pretty good job there. >> (Applause). >> MIKE HENDRICKS: You had a second one. You'll give us another chance. >> My notes are briefer, but basically the number and percentage of students who pass a designated language skills test after completing the program. >> MIKE HENDRICKS: Hmm. >> A different type of indicator: a passing standard rather than movement from pretest to post-test. The first one was movement; the second one is just passing. >> MIKE HENDRICKS: What was the outcome again? >> Increased language skills. >> MIKE HENDRICKS: Increased language skills. Hmm. What do we think of this indicator? This is an indicator of increased language skills. The indicator is whether they pass a test or not. What do you think? You're smiling. You know what I'm about to say. >> I think it's weak to the extent that it doesn't define what their level is.
So what is behind this, as it was in the first one, is the question of someone getting an A on the pretest. Do only people who get a certain score go into the program? Maybe you say everybody who fails goes into the program, so they have to pass, and that would show progress. The other would be everyone who scores lower than a B minus goes into the program, and their goal is to demonstrate progress. Both would indicate increased language skills. >> MIKE HENDRICKS: This may be a case where I pushed you by requiring two indicators. I pushed you to do more than you needed, because perhaps that first indicator was fully sufficient. That would have done it, no problem. Good for you. Who else? Yes. You have one? >> Yeah, but about that one I want to say --. >> MIKE HENDRICKS: Go ahead. >> Just to say, think about, with that, which is more important, increase or passing. Because if the child comes in at, say, on his GPA or whatever you call it, his grade point average is 24, and he increases to 60, if all you're measuring is whether he's passed, then you're not measuring the increase, the improvement. So you have to decide as a program what is the most important thing. If you are in a tutoring program where your goal is to assist teachers or classes to meet the No Child Left Behind requirement and it's pass-fail, that is one thing. If it's increase, you know, improvement, then passing is not the most important thing at that point. But both are right. I'm just saying you still have to go back and say why did you start the program, who are your students, and what is your mission. >> MIKE HENDRICKS: I love the essence of what you are saying, which I think is you have to --. >> If you love it, that is exactly what I'm saying. Yes. >> MIKE HENDRICKS: Yes, we agree. Oh, we're good. I think you said, you've got to ask yourself what the program is about. Duh, yes, exactly. Somebody down this way. Come on. One of these tables.
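The number-and-percent indicator the table proposed reduces to a simple count over pretest/post-test pairs. A minimal sketch, assuming a letter-grade scale and invented student data (none of this comes from the workshop itself):

```python
# Map letter grades to an ordered scale. The scale and the sample data
# below are assumptions made purely for illustration.
GRADE_ORDER = {"F": 0, "D": 1, "C": 2, "B": 3, "A": 4}


def improved_one_grade(pre: str, post: str) -> bool:
    """True if the post-test grade is at least one letter above the pretest."""
    return GRADE_ORDER[post] - GRADE_ORDER[pre] >= 1


# Invented (pretest, post-test) pairs for five hypothetical students.
results = [("F", "D"), ("D", "D"), ("C", "B"), ("D", "B"), ("F", "F")]

# The indicator: number and percent who improved one letter grade.
improved = sum(improved_one_grade(pre, post) for pre, post in results)
percent = 100 * improved / len(results)
print(f"{improved} of {len(results)} students ({percent:.0f}%) improved one letter grade")
```

Note that a student who enters with an A can never register as improved on this indicator, which is exactly the edge case the group raised; the program's entry criteria have to handle it.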
I will pick a table if no table volunteers. You're running a risk. There we go. A brave soul is reaching. Tell us your outcome first, please. >> Our outcome is senior citizens who have transitioned to the community are in control of their own life, more in control of their own life. >> MIKE HENDRICKS: I am really curious to hear your indicator for this. This is a good one. >> The indicator is the number and percent of seniors who can specify at least one way they are more in control of their own life six months after they transition. >> MIKE HENDRICKS: Okay. What do we think? Feedback? You have a comment? Please use the mike there. Read it one more time while she is reaching for the mike. This is the indicator. >> Okay, the number and percent of seniors who can specify at least one way they are more in control of their life six months after they came in. >> You're kind of doing what the CIL task force did. You're saying it's not just enough to say yeah. They have to actually tell you something, you're saying. Go ahead, please. >> I was just in total agreement. I think it's awesome. I think it's a wonderful indicator of exactly what you want to know. You want to know if they are successful. Are they still living on their own, and did they learn anything? >> MIKE HENDRICKS: You like that. >> Yes. >> MIKE HENDRICKS: Other comments? Yes, Tim has one. >> Mike, I'm wondering if we can see if the folks at home have any contributions. >> MIKE HENDRICKS: If anyone at home wants to chip in. Well, you're probably at work, not at home. If anyone wants to chip in, feel free to do that. Carol will tell us what they are. Good idea, Tim. Thank you. Other comments about this one? Let's stop and talk about it a second. This is something the CIL task force did too. I'm not criticizing you in the slightest. I was talking with Richard Petty at the break about this. It's like, you could use self-report. You can ask people what is happening with them.
You could use perceptions by other people, right? Other people see you as more independent or something. You could use some kind of actual behavioral change. These are all kind of a hierarchy, aren't they? Obviously, the easiest way out, and I'd say we did it a lot with the CIL task force, so I'm not criticizing: the easiest way out is self-report. You ask people, are you more independent, but tell me a way. Yours was not independence but --. >> Control. >> MIKE HENDRICKS: In control. Same thing. Okay. Let's just all be honest with each other. You know, wouldn't it be nicer, I'm not saying we should, just saying let's be honest with each other, wouldn't it be nicer if we had something a little more hard-nosed than that? Not saying it's possible, not saying to do it. I'm just saying let's be honest with ourselves about it. If we had some objective evidence that these people were, in our case, more independent, or in your case, more in control. I don't know what that is. I don't know how we get it. But I want to put it on the table so we all are thinking about that issue. My advice is, be as hard-nosed as you can manage to be. You know, if there's some way to get some objective evidence that a person is more independent or is more in control, I think we all know that that is better than self-report. I don't know the solution to it. Let's just keep the concept in mind. Maybe you have the solution. >> Just our second indicator. >> MIKE HENDRICKS: Great. >> The number and percent who can demonstrate an independent living skill they have learned six months after they have transitioned. Honestly, when we wrote that, I was thinking at my CIL I can't --. >> Sorry, more into the microphone. Really close to you. >> At my CIL, I can't imagine having that be an indicator that we have met, because I think that would be really difficult. >> MIKE HENDRICKS: Difficult, you say. I'm not saying these are easy or possible. I'm saying let's try, you know.
Let's hang that up as kind of a thing we're aiming for, to move up that level-of-difficulty scale if we can. One more. Who has an indicator they are excited about? Gosh, this is clever, creative. Pat. >> It's not that we're excited. We have one that is really difficult. >> MIKE HENDRICKS: Perfect, let's wrestle with it together. >> All righty then. What was it? >> MIKE HENDRICKS: It's so difficult she has forgotten. >> Seniors in the program. >> Participate in social activities. >> Participate in social activities. >> Of their choice. >> Of their choice. >> MIKE HENDRICKS: This is the stated outcome. Seniors in the program participate in social activities of their choice. >> We're not sure we did this right. Here is what we know. What we know is that people who are socially isolated are more likely to have health problems and wind up in a nursing home than those who are active and socially supported. Right? So we tossed around lots of different ways to get at that issue of social isolation, because we know how fundamental it is to staying healthy and on the outside. We just couldn't figure out a way to come up with indicators about what is really, I think, probably fundamental, crucial to people staying on the outside. Can we work on that together? >> MIKE HENDRICKS: Sure. Read the outcome again, and some smart person in here, or several, will give us some ideas. >> Read it again, Maureen. >> MIKE HENDRICKS: Can you read the outcome again, please. Think of an indicator for this. >> Senior citizens, no, participants in the program will participate in social activities of their choice. >> Might be something along the lines of the number and percentage of seniors who have participated in the program who participate in two or more social activities a month. >> MIKE HENDRICKS: Per month. What do you think? >> One of the things, we tossed that around, obviously, trying to look at numbers and amounts that we think indicate change.
Just because you participated in the activities doesn't mean you are less lonely. >> MIKE HENDRICKS: Oh, wait, I didn't hear anything in your outcome about loneliness. >> Well, that is why I think we were struggling. We built the outcome around being lonely and isolated. >> MIKE HENDRICKS: Oh. >> Then we went to the social activity. >> MIKE HENDRICKS: Ah. >> All in. >> MIKE HENDRICKS: What trap are they falling into? >> We don't know. >> MIKE HENDRICKS: They don't know. You're trying to come up with an indicator for a different outcome. >> Exactly. >> MIKE HENDRICKS: That is not going to work. >> Oh. Thank you. >> MIKE HENDRICKS: Remember the Christmas tree ornaments? I hope they are still online. Someone wrote in: I thought Mike said you can only hang it on one branch; now he is talking about needing two indicators for an outcome. That is true. There is nothing wrong with hanging two on a branch. You just can't hang the same one on two branches. That is the thing you can't do. You're talking about one outcome that says these people participate in social activities. Our friend here has given us a quite reasonable, I think, indicator of that. But now you're talking about another outcome. The outcome is they are less lonely or something? >> Less lonely, less socially isolated. >> Let me ask the group. Is that the same outcome or a different one? >> Different outcome. >> I think it's a different outcome. You can go to a social event and sit in the corner and still be just as lonely. Or you don't have to go to social events; people can come to you, and I think you will be less lonely. I think those are two different outcomes, two different ball games. >> Here is a question about the philosophy of independent living that we get caught up in, in our office. That is, if you say they will participate in a social event of their choice, that is a hundred percent IL philosophy.
But if the event of their choice doesn't happen in their community, then you have to be really careful that you're clear with them when you ask them the question. You know, if you say, what do you really want, and they say, I really want to go to the symphony, and there's not a symphony within 400 miles, that didn't help our outcome. So we struggle, as people in independent living: we want people to choose what they choose to do, but if it's not available, it's not available. >> MIKE HENDRICKS: Right. >> In that indicator, if they participate in two social activities out of the ones in which people participate and which we define as social, it doesn't say of their choice. Then you are kind of setting yourself up. >> MIKE HENDRICKS: So you have to be careful about that. >> We struggle with it. >> MIKE HENDRICKS: Yeah, yeah. >> So if it's true, and I think we all know that it is, that social isolation and loneliness are risk factors for hospitalization, nursing home placement, all this stuff, would it be an appropriate thing to do, because this is measuring a negative, I guess, in a way: the percentage and number of participants reporting less loneliness and social isolation after a period of six months in the program. Would that be an indicator that you're denting the loneliness and isolation? >> MIKE HENDRICKS: You tell me. I think your colleague isn't so sure. >> You see why we had so much trouble with this. >> Back to the self-reporting. It seems like you were thinking that wasn't the way we should go, if we could do something other than self-reporting. >> MIKE HENDRICKS: That second part is important. If you can do something else, it's better. There are many, many times we can't do something else. Just keep it in mind, you know. Keep in mind what you were trying to aim for. We're almost to the closing time. I hope you see the importance of indicators. Right? You may not have seen it this morning.
You don't measure outcomes; you measure indicators. Boy, by gosh, you'd better have some good ones or you'll be measuring something not so good. Two housekeeping things before we break. One, this has been such a good discussion this afternoon on several topics that we didn't want to rush it. So there's a slight agenda change, especially for people online. The thing we were going to talk about from 4:00 to 4:30, sources and methods, we are going to just pick up first thing in the morning. We can fit it in, don't worry. Not a problem. We will be dealing with sources and methods first thing tomorrow morning at nine. Second thing, this is on every table. Would somebody find it? It should be somewhere in the middle of the table. You got it? I'm not advertising for this corporation, whatever it is. Here is what I'd like you to do, though. I would like everyone at the table to take one sheet of this, just one sheet, please. Pull off one sheet. Bob and I really believe in feedback that is useful for improving. That is what this whole thing is about. It's true for us too. Hopefully we have done something well today. Certainly not everything. If there is something we didn't cover well enough, or something you're still confused about, or some question you have, or something you want us to say more about, or say better, or give more examples of, or something you want us to correct that we didn't do quite so well today, won't you just write it on here. Totally anonymous. Don't put your name on it. Get it to us when you leave. Bob and I will look at it tonight and see what we can do for tomorrow. Richard has something to say. >> Also, for those of you, the very few who have been able to hang on with us, it's 7:30 on the east coast and 6:30 in the middle of the country. Several of you have hung in, and we're so glad that you did. Will you do the same? Send us a tweet, or use the question box, and give us your feedback.
We will certainly consider that. Carol will hang around for a few minutes while you type those in. We are very eager to get those. >> MIKE HENDRICKS: Thank you, Richard. Talking about data and learning, I have not been so good about that. I'll write that down myself: pay more attention to our online friends here. Whenever you have written that down, you're free to do whatever. Don't forget the reception at 5:30. Thank you for working hard today. We have thrown a lot of stuff at you. You have worked really hard. We appreciate that. >> What was your name? >> MIKE HENDRICKS: My name? Bob. If you have a complaint, my name is Bob.