Friday, May 1, 2015

I Just Need to Take English and Math Placement tests? Really?




Why? Why? Why are so many educational institutions living in the past?

When reviewing some research for an article I'm writing, I was a little taken aback. The article, on self-regulated learning (if you must know), gave a brief overview of ideas about learning that influenced Western education. Among these, Thurstone (1938) provided what was then thought to be a perfect description of the abilities of students (the Primary Mental Abilities Test).

While these ideas contributed significantly at the time, and have relevance today, we've learned a few more things since.

In spite of these new understandings, these testing practices persist today. The idea is that the right test can classify and place students in just the right level for optimal instruction, for example, the right math group or the right English class. This placement practice consistently yields poor results overall, yet it is still widely used as a primary placement tool today, from preschool to college.

I am not arguing that we should throw out testing; tests do provide some good information, but no more than that. What we know from research today is that self-regulation has far more to do with successful academic outcomes than performance on a placement test does. Testing alone is not enough.

We need more than the right tests for correct placement.

Monday, February 23, 2015

To Conform or Not to Conform?


Conformity is all around us. While it sounds negative to label someone a "conformist", saying that the person is a "team player" is appealing. Isn't this the same thing framed in a more palatable way? We celebrate historical figures for non-conformity, especially those political figures who proved to be on the right side of things in retrospect, but these non-conformists often faced opposition and strife. This is true of Lincoln, Rosa Parks, and Martin Luther King Jr., to name a few.

Conformity is a reality of the world in which we live. Social scientists describe stages of development of the self that surround the notion of how others perceive us (e.g., Looking Glass Self).

We, as human beings, are highly influenced by those around us. A classic psychology experiment by Solomon Asch (1956) with American college students over 50 years ago demonstrated how much we are influenced to conform. It's surprising! He conducted a social experiment in which he asked a very simple question to a group of students. One of the students was the real participant; all the others were confederates told to pick the same wrong answer. Asch found that when presented with a very easy problem, one with an answer a child could figure out, if everyone else gave the same false answer, 75% of participants went along with the group's incorrect answer at least once. The questions were asked more than once, and when all responses were added up, an average of about 35% of the naive participant's responses conformed to the wrong answer.
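Those two figures measure different things: the share of participants who conformed at least once, and the average rate of conforming responses across all trials. A minimal sketch with invented data (not Asch's) makes the arithmetic concrete:

```python
# Illustrating the two Asch statistics with invented data.
# Each inner list is one participant's 12 critical trials:
# 1 = conformed to the group's wrong answer, 0 = answered independently.
responses = [
    [1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # conformed on some trials
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # never conformed
    [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0],  # conformed often
    [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],  # conformed rarely
]

# Share of participants who conformed on at least one trial.
conformed_at_least_once = sum(1 for p in responses if any(p)) / len(responses)

# Average conformity rate over every response from every participant.
all_trials = [t for p in responses for t in p]
average_conformity = sum(all_trials) / len(all_trials)

print(f"Conformed at least once: {conformed_at_least_once:.0%}")
print(f"Average conformity rate: {average_conformity:.0%}")
```

With this invented data, 75% of participants conform at least once even though only about a quarter of all responses conform, which is why the two percentages in the experiment differ so sharply.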

So this means there are situations when in groups that we can feel pressure to knowingly choose the wrong answer. This experiment and others like it have been repeated many times. In fact this study was repeated with participants that were monitored with fMRI technology. What was found?

Basically, it is a painful experience to go against the group's position (Berns and colleagues, 2005). Activity in the area of the brain associated with pain and discomfort (e.g., the amygdala) was very busy for the students who went against the group's wrong answer. The brain images of those who went with the group opinion weren't nearly as disturbed.

What could we do?

I suggest knowing what you think and believe in advance; otherwise the group can easily influence you.


IME 2015 Conference, Keck School of Medicine at USC, Oral Presentation



The following is the longer version of the paper submitted to IME 2015 (conference syllabus). Click here to view the oral presentation (Google Docs) or here for a YouTube version, and click here to view the Q&A.

The Effect of Audience Response Systems on Metacognition in Graduate Students: A Two-Year Mixed Methods Study   

Melanie Brady, Ed.D., University of Southern California, Rossier School of Education
Jane Rosenthal, Ed.D., Assistant Dean, School of Applied Life Sciences, Keck Graduate Institute
Christopher P. Forest, MSHS, DFAAPA, PA-C, University of Southern California, Keck School of Medicine, Division of Physician Assistant Studies

Problem Statement
Use of educational technology to engage learners continues to grow at a rapid pace. Studies of the effectiveness of clicker use find that when clickers are utilized with research-based instructional strategies, the learning experience in large lectures is enhanced.2 In a study with undergraduates (n = 198), metacognitive self-regulation appeared to improve when clickers were utilized in this manner. The comparison (low technology) and experimental (clicker) methods each demonstrated a significant influence on learner metacognition: clickers with the summer cohort and the comparison method with the fall cohort. However, when performance outcomes and qualitative data were factored in, clickers demonstrated a high degree of significance (p < .01). This current mixed methods study of audience response systems and metacognition investigates whether the experience for graduate health science candidates (e.g., 1st year Physician Assistant candidates, 2013 and 2014) is consistent with the undergraduate experience, and to what degree, between graduate cohorts (2013-2014).

Rationale
The importance of these investigations lies in the growing body of research showing that self-regulated and metacognitively aware learners tend to have improved outcomes and that metacognition and self-regulation are teachable. Research suggests that when clickers are utilized with instructional strategies (e.g., questioning and peer instruction), performance outcomes increase and metacognition may be affected. Metacognition, the regulation of cognition and self-knowledge, is an essential component of the learning process in becoming a self-regulated learner. This mixed methods comparative study examines the extent to which high-tech devices (clickers) and low-tech devices (paddles) affect learner metacognition. This study extends our 2013 mixed methods examination of clickers and metacognition conducted with 1st year Physician Assistant candidates and further comparison between clickers and paddles.1,2 If the data generated by the two years are not robust enough, a third cohort is proposed for fall 2015 to increase the strength of results and the potential for generalization.


Hypothesis
The response device that more effectively influenced metacognition would be associated with higher performance outcomes. Based on the results of the undergraduate study, we predicted that use of clickers would lead to less social comparison, which could enable more productive learning, and that use of paddles would lead to more social comparison, which could interfere with the learning process.


Methods 
Data were collected from 54 graduate candidates in 2013 and 51 graduate students in 2014 during a behavioral sciences course. Clickers were used during weeks 1-5 of the course and a low technology response system (paddles) during weeks 8-12. Paddles are handmade signs held up to indicate preferred answers (A-E); this method was selected for comparison as an analogous system to clickers in that it provides a quick visual check of student responses, allowing participants to be polled once rather than raising hands several times for a multiple-choice question. This comparative, mixed methods study employs several measurement instruments and a pre- and post-test design to compare the two response systems. The components of metacognition of interest in this study are metacognitive judgments and monitoring, and metacognitive control and self-regulation.

Quantitative instruments. In the first week of the course, pre-test data and demographic information were collected. Questions from the Motivated Strategies for Learning Questionnaire (MSLQ)3 served as the pre-/post-test instrument. Two instruments that measure feedback systems and metacognition1 were administered at week 5 (experimental/clickers) and at week 10 (comparison/paddles). The Metacognition in Lecture Survey2 (2013-clickers, α = .910; 2013-paddles, α = .935; 2014-clickers, α = .806; 2014-paddles, α = .888) measures metacognitive self-regulation experienced by learners in lecture through changes in learning behavior inside or outside of lecture, and the Metacognition Attribution to Response Device Scale (2013-clickers, α = .723; 2013-paddles, α = .704; 2014-clickers, α = .681; 2014-paddles, α = .827) measures the level of metacognitive influence that learners believe they experience as a direct result of the polling method. Mean quiz scores from the first 5-week session served as the measure of performance outcomes for clicker use, and mean participation scores for weeks 6-10 served for the comparison treatment (paddles).
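The α values above are Cronbach's alpha, a standard index of a scale's internal consistency. A minimal sketch of the computation, using invented Likert-style item scores rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses: 6 respondents x 4 items on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

Values in the .8-.9 range, like most of those reported above, indicate that the items hang together well as a single scale.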

Qualitative instrumentation. Participants completed an online qualitative survey using Qualtrics© that consisted of open-ended questions to elicit reflections about response device use. Interviews were conducted using purposeful sampling with the following criteria: 1) low mean scores, indicating little metacognitive influence attributed to clicker/paddle use; 2) mean scores in the median range, indicating a moderate-to-neutral influence; and 3) high mean scores, indicating a strong influence.

Results
High comfort level and prior use of audience response systems were reported by 60% of participants from the 2013 cohort and 100% of participants from the fall 2014 cohort on the initial survey. Two-tailed t-tests for dependent means were conducted to examine between-groups differences in metacognitive self-regulation (the pre-post-post MSLQ) and performance outcomes (e.g., in-lecture clicker quizzes). Significance was not found between the Graduate Health Science 2013 and 2014 cohorts on the pre-post-post-test administrations, but significance was found with the metacognition instrumentation. The lack of significance between groups on the pre-post-post-test indicates group similarities, which increases the potential strength of results and the ability to generalize. The first post-test administration followed use of clickers; the second post-MSLQ followed the comparison method (low technology). This indicates that learners in both cohorts gauged individual metacognitive self-regulation similarly at the start of the course, following the treatment method (clickers), and following the comparison (low-technology polling).


Differences were found on formative performance assessments between the 2013 cohort (M = 87.38, SD = 5.86) and the 2014 cohort (M = 75.41, SD = 6.27), demonstrating differences in metacognition during lecture between the two groups (t(52) = 10.263, p = .001). Significance was demonstrated between groups for both instruments, measuring the influence of metacognition during lecture and the attribution of metacognition to the response device (t(52) = 4.84, p = .001; t(48) = 5.83, p = .001). Qualitative analysis results were similar between groups. Clickers were perceived as a more effective way to monitor learning, while the low technology method resulted in conformity and reduced pressure to prepare for lectures. Differences occurred in that a small portion of the 2014 cohort suggested that the low technology system created opportunities for discussion and learning and was enjoyable, while the majority of peers in the same cohort did not share this opinion, nor did the 2013 cohort. Reports of positive learning experiences with paddle use tended to accompany indications of relief at the ability to rely on the group in lieu of individual preparation when schedules were busy.
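The dependent-means t statistics reported above follow the standard paired formula, t = d̄ / (s_d / √n), where d̄ is the mean of the paired differences. A minimal sketch with invented pre/post scores (not the study's data):

```python
import math

def paired_t(pre, post):
    """Two-tailed t statistic for dependent (paired) means."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var_d / n)                 # standard error of the mean diff
    return mean_d / se, n - 1                 # t statistic, degrees of freedom

# Invented pre/post quiz scores for 8 learners.
pre  = [70, 65, 80, 75, 60, 72, 68, 77]
post = [78, 70, 85, 80, 66, 75, 74, 82]
t, df = paired_t(pre, post)
print(f"t({df}) = {t:.2f}")
```

The resulting t is compared against the t distribution with n - 1 degrees of freedom to obtain the two-tailed p values reported in the abstract.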

Lessons Learned
Quantitative results indicate that clickers influence learner metacognition more than low technology response devices do, suggesting that conceptual understanding may be clarified through the use of clicker items and interactive teaching strategies (e.g., questioning and peer instruction), leading to improved formative feedback for enhanced learning. Both cohorts indicated that clickers strongly influenced peer comparisons, consistently positively influencing the learning process. Clickers can improve the accuracy of metacognitive judgments and influence strategies utilized for learning outside of lecture. Qualitative results suggest that graduate learners are more confident in strategies utilized for note-taking in lecture and for preparation for lecture. Several learners reported changing answers based on peer responses and feeling less pressure to prepare for lecture when the low technology system was utilized. The focus of clickers on independent learning improved learners' ability to monitor learning, and results indicate that learners are more apt to prepare for lecture when there is an individual response component.

Selected References
1. Brady M, Rosenthal J, Forest C. Metacognition and Performance Outcomes: An Examination of the Influence of Clickers with 1st Year Graduate; in progress.
2. Brady ML, Seli H, Rosenthal J. Metacognition and the influence of polling
systems: How do clickers compare with low technology systems? Educ Technol Res Dev. 2013;61:885-902.
3. Pintrich PR, Smith DAR, Garcia T, McKeachie WJ. Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educ Psychol Meas. 1993;53(3):801-813.

Sunday, September 28, 2014

A thought for today about web-based learning environments and learning



I was thinking about principles of good web design for learning and self-regulation....

Structured web design has the potential to build in self-regulated learning. Utilizing features such as polling and interactive quizzes, in real-time courses or self-guided formats, can provide specific, timely feedback to students in the learning process. Research regarding feedback indicates that specific, timely feedback improves the learning experience. The learner can reflect on his or her level of understanding of the learning task (e.g., preparation for a test, a lecturette, a video, an activity). Reflection and self-monitoring are functions of self-regulation and metacognition.
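As a concrete illustration of "specific, timely feedback", a quiz item can pair each distractor with targeted guidance instead of a bare "incorrect", so the learner gets something to reflect on the moment they answer. A minimal sketch (the question, options, and feedback strings are all invented examples):

```python
# Minimal sketch of a quiz item that returns specific, timely feedback.
question = {
    "prompt": "Which practice best supports metacognitive monitoring?",
    "options": {
        "A": "Re-reading the chapter repeatedly",
        "B": "Self-testing and reviewing errors",
        "C": "Highlighting every key term",
    },
    "answer": "B",
    # Distractor-specific feedback tells the learner *why* a choice
    # falls short, not just that it was wrong.
    "feedback": {
        "A": "Re-reading feels productive but gives weak feedback on understanding.",
        "B": "Correct: self-testing exposes gaps while there is still time to act.",
        "C": "Highlighting alone does not test whether you can retrieve the idea.",
    },
}

def respond(item, choice):
    """Return (is_correct, specific feedback) immediately after a response."""
    return choice == item["answer"], item["feedback"][choice]

ok, msg = respond(question, "A")
print(ok, "-", msg)
```

The same structure works for a live polling question or a self-paced module; what matters for self-regulation is that the feedback arrives immediately and names the gap.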

While self-regulation and metacognition can be taught, the degree to which online or web-based learning formats actually improve learner self-regulation is uncertain. This may be a function of individual variability (also a construct of metacognition). Web-based learning formats can require self-regulatory-like behaviors as a by-product of the structure of the course, activity, or program; actual individual learner self-regulation is not necessarily improved per se. Research does seem to consistently indicate that the self-regulated learner, whether in a brick-and-mortar context or a virtual or web-based learning environment, has better outcomes. A logical approach to achieving certain outcomes seems to consistently point to principles of good learning strategies and instructional design. The format is simply the vehicle.
 
(Comments are loosely based on discussion from an article in review -Brady, Rosenthal, & Forest, 2015).

Tuesday, July 29, 2014

A call for more educational counselors and improvements to pre-service training and professional development


Educational counseling is poorly funded in California, and the important role this job plays in the lives of students receives little attention.

This is an area of employment in the school systems that seems to be thought of as dispensable and easily replaced by "guidance technicians". Recent evidence to this effect lies in the short-lived increase in categorical funds, money that, among other things, pays for educational counseling in California schools (K-12, community colleges, and 4-year colleges).

While the recommended ratio of counselors to students is 1 to 200, and the national average is above 1 to 400, the ratio in the California school system is 1 to 1,000. This is especially true of schools serving low socioeconomic areas, further compounding the many impediments already faced by populations in these communities with regard to access to education.

The attention to educational counseling at the national level this week is overdue. President and Mrs. Obama gathered university presidents to discuss the importance of this role in the lives of disadvantaged students. Important issues discussed included pre-service training and professional development.

Saturday, July 26, 2014

Accountability & outcomes: What does Bill Gates's address to higher ed mean at the institutional level?



In a recent address to the higher education community, Bill Gates points to research that is wise to heed. Measurable outcomes of success, from the perspective of businesses and students, need to drive the future of higher education. For businesses, the question is, “Are students graduating with what they need to know?”, and for students the question is, “Will I get a job, one with a future?” Bill Gates reinforces the growing need for accountability in higher education with an emphasis on program completion that directly translates to employability.

Gates stresses the importance of accessible, web-based programs that streamline the process of learning. This requires a degree of accountability that, essentially, serves students and the business community by providing a product, a deliverable. Programs and courses that are able to provide this type of education are the ones predicted by Bill Gates to remain as the “…small number of top-quality online courses in key disciplines [that will] replace home-grown lectures on many campuses”.

What does this mean to the traditional collegiate context? 

First, what not to do: Resist the popularized temptation to quickly design a new webpage stating how this matter is addressed or how it will be addressed. Also, abstain from touting the ways this is already done through current programs and courses. Clearly it's not.

What to do: Do gather data quickly on program outcomes pertaining to successful employment. Do pull data from job market searches, beyond a cursory level, that demonstrate a contribution to the workforce locally, nationally, or internationally; this is dependent upon institutional factors. Do eliminate, or give an extreme makeover to, programs that are not contributing to this level of student success (the makeover only for programs with some real potential); decisions like these require "teeth" and occur at the administration level. As much as administrators in many instances bend to involve faculty in decisions to garner support and maintain productive communication channels, let's face it: faculty committees are unable to infringe upon community members by identifying programs as failing, unnecessary, or in need of restructuring. That level of buy-in will only occur at the institutional level, presented to the entire academic community, with the notion that it's time to change or be left behind.

Do provide an effective, quality learning experience. On the ground level, institutions that align learning programs, courses, and student learning outcomes, and do so with attention to research-based learning strategies, instructional design, effective use of technology in learning, and accountability, will accomplish this task. Courses developed from effective, well-developed instructional design models will be the type recommended by businesses and used by students. The research is clear enough; the tools need to be implemented and undergo an ongoing process of evaluation to maintain service level and to grow and change as needed. The financial safety net of the past, that of aligning the institution with the requirements of accreditation organizations to maintain student loan funds, may no longer be enough. Academic institutions placed on warning in recent years have discovered that running from student learning outcomes will eventually lead to loss.

It makes no sense from an administration perspective, or from a faculty perspective (regardless of some of the feet-dragging on accountability issues), to resist student learning outcomes and accountability.

The emphasis on education that leads to employment has every indication of continuing as a long term trend.

I say...serve the students and the money will come!