Monday, June 25, 2007

Week Seven--General Education

I'm just going to reflect on Gina and Kelly's seminar questions here for GMIT 660 like I did in GMIT 650:

Kelly's first question/s:

The article on “Assessing General Education Core Objectives” was based on the curriculum at Southeast Missouri State University being assessed for three core objectives: 1) the ability to locate and gather information; 2) the ability to think, reason, and analyze critically; and 3) the ability to communicate effectively. The Assessment Committee evaluated samples from freshman and senior seminars and, upon completion, itemized the major findings (page 5) drawn from the analysis.

1) Although all of these findings are important, which two should be given top priority and why?

2) Does SCC face some of these main issues and if so, which ones and why?
Remember, you can answer either question.


Kathy Zabel's main reply to Kelly:

One of the most critical findings is part of the first bullet in Major Findings: the statement, “Some artifacts produced in the freshman seminar were evaluated as stronger performances than some pieces produced in the senior capstone courses.” Students ready to graduate should by all means be producing work superior to that of the lower-level students. It implies that those seniors didn’t “learn how to learn,” didn’t learn from their experiences, or didn’t have the experiences in earlier courses. In any case, it is a detriment to those students getting ready to graduate.

The second top priority is the lack of critical thinking observed. That is the talk everywhere: that students must be able to think critically and solve problems. They must be able to think critically in real life to survive in their jobs. That’s not just instructor talk; it’s what the supervisors in the physician offices are telling us: “Your students MUST be able to think critically.”

My main reply to Kelly:

1) Although all of these findings are important, which two should be given top priority and why?

Kelly, I would say that "number one" would be the fact that "all who were involved with the assessment project agreed that opportunities are needed for dialogue among faculty to discuss possible program modifications based on the results of the assessment project."

Agreeing is the easy part; the hard part is following through on that agreement! At least two or three faculty members should form the committee to plan agendas and schedule times and places for this to happen!

"Number two" would be, for me, the fact that there was such a wide range in performance in the areas of formulating a thesis, producing an edited writing sample, citing source materials accurately, locating relevant source material, evaluating others' and constructing their own arguments, and producing polished pieces of writing.

Some students could barely get started, while others performed masterfully. As an instructor, I find that unacceptable! That gap has to be closed significantly!

2) Does SCC face some of these main issues and if so, which ones and why?

From what I have seen, yes, SCC faces these same problems/issues! I don't think there is a lot of agreement or discussion that takes place regarding program modifications based on assessment. I don't know if there have even been any cross-disciplinary faculty group assessments done. Have there been any that you know of?

In regard to the wide range in performance, I have seen it firsthand, and it is very disappointing. I've seen it in database research training and in the classroom. How do we work to close that gap? Will it take more time working with students one-on-one? Would more milestones working toward better performance help? Do we need classes designed specifically for this?

My conclusion and reflection:

Kathy told Kelly that her top concern was that the rookies were outscoring the veterans! That is not too surprising to me. Some students are way ahead of their classmates, and sometimes ahead of people two or three years their senior, simply because they are extra-gifted students. It would be more alarming if it were across the board.

I think it is much more important to try to close the gap between very poor performances and outstanding performances among the students. To me there is no good reason for it. For students to be at the level of higher education, they have to bring more to the table than just getting by. I kind of take it personally as an instructor. If I work hard at being a good instructor, I want all students to give a good effort. When they don't, I want to find out why. I ask them why they seem to be struggling so much. Is it time constraints? Is something bothering them personally? I want to find that out!

Kelly's second question/s:

The article “Imposed Upon: Using Authentic Assessment of Critical Thinking at a Community College” dealt with the Board of Trustees of the State University of New York changing the core requirements. This impacted all the transferring graduates of the local community colleges and resulted in major academic changes. Then came the question of assessment: how to ensure students were obtaining a secure knowledge base and valued skills such as information management and critical thinking. Three instructors who taught on different campuses devised a plan in which each wrote an essay at a different performance level. These essays and an instruction sheet were given to students, who then assessed the papers in a written report, arguing for how they would rank each one. A rubric was developed to assess the students’ essays, and overall observations were made in regard to their critical thinking abilities.

1) Did this method fulfill the requirement of assessing critical thinking? Why or why not?

2) Is one method enough or should multiple tools be used? Suggestions?

You can answer either question.

Doug Brtek's main reply to Kelly:

First of all, I am satisfied that the method was sufficient to assess critical thinking in this case. However, it wouldn't hurt to explore alternative ways to assess critical thinking such as verbal or written expression, reflection, observation, experience and reasoning. I know assessment is never the proper place to explore new ideas, but they should be considered at different stages throughout program assessment. After all, shouldn't we be assessing our assessment methods?

My main reply to Kelly:

Kelly, according to the article, it was sufficient to fulfill SUNY's mandated assessment requirements. I think it was a great way to assess, as well as teach, because showing the wrong, almost-right, and right ways to do an assignment is a tremendous tool for getting students to learn, not to mention think and write critically. It helps remove some of the gray areas and misunderstandings associated with learning.

It's kind of like teaching a youngster not to play with fire, or a boiling pot of water; sometimes you have to let them get burned (just a tiny bit without hurting them, mind you) to get them to know what NOT to do, as well as what to do, and how to do it. The tricky part is letting them get burned without hurting them.

Showing students the right and wrong ways to do an assignment (by using essays at three different levels of quality), while giving them some assessment practice at the same time, was a great idea on the authors' part. It leaves the students with a clearer mental picture of the assignment, one that hopefully they will retain.

As far as other ways to assess critical thinking:

One might take the essay/assessment exercise a step further and have a panel or roundtable discussion among the students to critique each other's essays (hopefully without too much tension) so they can learn more by sharing thoughts and ideas. The main focus would be discussing how each could have thought more critically about the essays.

My conclusion and reflection:

Doug agreed with the assessment in general, but he suggested verbal and written expression, reflection, observation, experience, and reasoning as viable alternatives to consider. I do like Doug's suggestions, but I feel the authors wanted to keep it simpler and more standardized to start. If they were to do this on an ongoing basis, these would be good guidelines.

Kelly's third question/s:

In the article “Community College Strategies – Assessing the Achievement of General Education Objectives: Five Years of Assessment,” Oakton Community College did a locally developed assessment of general education objectives. From 1999 through 2002, the approach was to use “prompts” for assessment; in 2003, the approach was changed to evaluating actual classroom work. The article goes on to compare the five years of assessment and state observations about the entire process and its results.

In the article “Community College Strategies – Assessing General Education Using a Standardized Test: Challenges and Successful Solutions,” College of DuPage took an entirely different approach than Oakton Community College. Instead of using “prompts,” they developed a strategy in which six American College Test/Collegiate Assessment of Academic Proficiency (ACT/CAAP) area tests were given to students in a select number of introductory courses (beginning students) and advanced courses (graduating students). The article goes on to describe the results of their assessment method.

1) Of these two methods of assessment, was one method superior over the other in obtaining assessment data that can be used constructively? Explain your reasoning.

2) Pick one of these methods and explain the advantages and disadvantages of using it as a tool for assessment.

Remember, answer the question that appeals to you the most!

Jessie La Cross's main reply:

I keep going back and forth on this question. On one hand, I was not comfortable with the prompting methods Oakton CC used.

On the other hand, the purpose of the assessments is to measure how well students can meet the general education objectives, some of which were listed as:

1) define problems
2) construct hypotheses
3) interpret data using a variety of sources
4) explain how information fits within a historical context
5) communicate findings effectively in writing and in speech
6) work and communicate effectively with people
7) apply ethical principles to issues

I don't know how some of these, especially the highlighted ones, could adequately be measured with a standardized test. I liked the approach used in 2003, where actual classroom work in real time was used to evaluate the speech and teamwork objectives. Even though not as many students were able to be assessed this way, with practice, this seems like it could be a good strategy to use.

Each tool--standardized objective tests and performance based tests--has different strengths and weaknesses, but when used together I think they can actually complement each other.

My main reply:

Kelly, I think both articles display good examples of assessment. What I like most about the "Five Years of Assessment" example is the fact that they tried different formats over a significant time period. That allows for a wider range of experimentation, which should translate into finding a good assessment vehicle suited to student backgrounds and the coursework implemented.

Regarding the "Standardized Tests" article: the thing I liked best about their format is that they used response sheets that faculty used to voice what should be done with the results of the assessment data, and then made public (anonymously) in a document. Now, if they used that document to further improve coursework, they made great strides in improving learning and assessment.

My conclusion and reflection:

Jessie gave some very good reasons--pro and con--on standardized versus performance-based tests. Her feeling, in the end, was that using the two together somehow would be a good way to go about testing.

My reply was that I liked assessment with variety over a period of years to get a better picture, and a more realistic view. Try different things--get a more extensive sampling of assessment so it is more reliable.

Feedback both ways is a huge factor here, too! Making the results public for peer review has to do a lot of good as well. When others can see your results, it pushes everyone to try harder, do better, and work toward improved learning!

My final reflection on the seminar:

I think Kelly and Gina did a great job again! Kelly is so good at keeping discussions going by replying thoughtfully to responses and asking for more input! I feel that I did a good job of adding to the seminar with my thoughts and feelings on the different questions provided. It was another great learning experience! See you next week in GMIT 660!