Final Formative Assessment Design 3.0: The Path to True Education

In my first formative assessment design I came up with an idea for an end-of-unit assessment that I believe is unique, individualized, and highly inquiry based.  Since then, my planning has shifted away from a specialized assessment for a specific unit and toward a whole new way of doing formative assessment for ANY and ALL units, topics, classes, and subjects.  This kind of shift has been discussed many times, and what some consider revolutionary, others view as unnecessary, inefficient, or simply useless.  My hope is that this idea and design are customizable enough, detailed enough, and of high enough quality to be beneficial to all.

So, what is my new idea?  In the 21st century, why do we still assess students' learning with a single standardized paper assessment that allows for no differentiation, no true measure of knowledge, and no true progress monitoring, but instead offers only snapshots of how students perform on one specific assessment, on one given day, under one set of circumstances?  As stated by Mayrath, Clarke-Midura, Robinson, and Schraw (2012):


“Our society continues to perseverate on assessing 20th century reading, writing, mathematics, science, and academic knowledge, typically with multiple choice and other psychometrically validated tests that can be efficiently administered in one or a few hours. The landscape of skills and knowledge being tested does not stretch to the 21st century. The testing format does not sufficiently tap the functional cognitive procedures and social practices of today. Many groups are trying desperately to correct this misalignment, both inside and outside of the assessment industries. But the process is slow and laborious, with politics complicating everything.” 

A better way to assess student knowledge would be to use the digital tools we have, individualize the assessment, measure in a high degree of detail, and monitor students over time to see their evolution.  This would flip how education is truly done, changing assessment from a measure of specific learning objectives to a deep evaluation of a student's knowledge of a subject, because what is measured is the whole student and their full range of knowledge and comprehension; the rigidity is removed and fluidity created.  In this model, all assessments of knowledge are formative, because there is no defined learning "unit" or "time period": at all times students are simply focused on growth and development.


Additionally, this would allow for varying "tracks" depending on a student's desired outcome, each with a desired end "profile," and the system would measure the student's progress along that path.  For example, if a student wants to enter a specific university program, that university could set a desired "learner profile," and the student could follow that learning path until reaching the outcome that demonstrates the proficiency the university requires.  This could be done with different universities, trade schools, careers, and more, and would allow individualized, tailored educational experiences for different people.  This applies Understanding by Design: you begin with the desired outcome and create the learning path, and therefore the learning experience, specifically to reach that outcome.
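As a rough illustration of how progress toward such a target "learner profile" might be measured, consider the sketch below; the skills, the 0 to 1 proficiency scale, and the numbers are entirely hypothetical, not part of any existing system.

```python
# Hypothetical sketch: comparing a learner's current profile to a target
# "learner profile" set by a university, trade school, or career track.
# Proficiency is assumed to be a 0.0-1.0 score per skill; all values are illustrative.

target_profile = {
    "algebra": 0.85,
    "statistics": 0.75,
    "critical_thinking": 0.80,
}

current_profile = {
    "algebra": 0.60,
    "statistics": 0.70,
    "critical_thinking": 0.55,
}

def progress_report(current, target):
    """Return the remaining gap for each skill and overall progress toward the target."""
    gaps = {skill: max(goal - current.get(skill, 0.0), 0.0) for skill, goal in target.items()}
    overall = 1.0 - sum(gaps.values()) / sum(target.values())
    return gaps, overall

gaps, overall = progress_report(current_profile, target_profile)
print(f"Overall progress toward target: {overall:.0%}")
for skill, gap in gaps.items():
    print(f"  {skill}: {gap:.2f} still to gain")
```

A report like this could be regenerated after every assessment, so the learner always sees where they stand on their chosen path.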

This new assessment would completely shift education, because schools, teachers, and students would no longer be tied to an evaluation based on an outdated and inaccurate form of assessment.  It would allow education to simply focus on education (a novel idea, I know!).

So, let's dive into the specifics of this assessment.  The ideal assessment I am imagining is an interactive digital assessment that is more than an assessment: it is a learner profile.  This profile stays with a learner throughout their entire educational journey and measures every subject they study, giving a complete, holistic picture of the learner's strengths, weaknesses, and more.  In addition to simple measurements of learning objectives, it would also give feedback on their critical thinking skills and on how they respond to different questions and varying circumstances.  When a student answers a question, the tool would assess the response for accuracy, along with the time spent on the question, what other selections were made before the final submission, and previous answers that could relate to the answer provided, producing both an accuracy score and a certainty score.  Unlike a traditional assessment, it would then provide a follow-up question chosen for the specific circumstances and learner: a simpler question, a more complicated question, or a question at the same level.
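To make that follow-up logic concrete, here is a rough sketch in Python; the certainty scoring, the thresholds, and the field names are my own illustration of the idea, not a description of any existing tool.

```python
from dataclasses import dataclass

@dataclass
class Response:
    correct: bool          # was the final submission accurate?
    seconds_spent: float   # time spent on the question
    changed_answers: int   # selections made before the final submission

def certainty_score(resp: Response) -> float:
    """Rough 0-1 certainty estimate: fast answers with few changed selections suggest confidence."""
    time_penalty = min(resp.seconds_spent / 120.0, 1.0)   # assume ~2 minutes means full hesitation
    change_penalty = min(resp.changed_answers / 3.0, 1.0)
    return max(1.0 - 0.5 * time_penalty - 0.5 * change_penalty, 0.0)

def next_difficulty(current: int, resp: Response) -> int:
    """Pick the follow-up question level: harder, easier, or the same."""
    certainty = certainty_score(resp)
    if resp.correct and certainty >= 0.7:
        return current + 1          # confident and correct: step up
    if not resp.correct and certainty <= 0.4:
        return max(current - 1, 1)  # uncertain and wrong: step down
    return current                  # otherwise, probe again at the same level

# Example: a quick, correct answer with no changed selections steps up from level 3 to 4.
print(next_difficulty(3, Response(correct=True, seconds_spent=30, changed_answers=0)))
```

The point of the sketch is simply that accuracy and certainty are tracked separately, so a lucky guess and a confident, correct answer lead to different follow-up questions.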

Some additional features that could be present in this assessment tool are the ability to add keyword tags to questions, allowing deeper assessment of learning and knowledge; the ability to include varying measures of knowledge, ranging from basic "understand" questions all the way to complex "evaluate," "create," or "analyze" questions; and the ability to measure varying degrees of certainty. Questions would range from multiple choice, true/false, and short answer to application to situations, interactive simulations, group interaction questions, and more.  This high level of assessment, with constant monitoring of what a student knows, how well they know it, and what secondary abilities they have, would give the learner and educator complete feedback for modeling learning specifically for this student, which means follow-up learning is not only present but highly differentiated, individualized, and precise for what the learner needs. Keyword tags would exist for content as well as for the levels of Bloom's Taxonomy, allowing assessment of student knowledge alongside critical thinking ability and varying levels of application.  Additionally, you could monitor how students handle different types of questions, such as the difference between their multiple choice and short answer performance, to provide instruction targeted to their needs and allow for improved reflection.
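One possible way to represent such tagged questions, and to roll results up by Bloom's level, is sketched below; the field names, Bloom labels, and the sample question are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative question record: content tags plus a Bloom's Taxonomy level and a question type,
# so results can be aggregated by topic, by thinking skill, and by question format.
@dataclass
class Question:
    prompt: str
    question_type: str          # e.g. "multiple_choice", "short_answer", "simulation"
    bloom_level: str            # e.g. "remember", "understand", "apply", "analyze", "evaluate", "create"
    content_tags: List[str] = field(default_factory=list)

q = Question(
    prompt="Simplify (2x + 4) / 2.",
    question_type="short_answer",
    bloom_level="apply",
    content_tags=["algebra", "algebraic_fractions"],
)

# Aggregate results by Bloom level to see, for instance, strong "remember" but weak "analyze" performance.
results = [(q, True)]  # (question, was_correct) pairs collected during an assessment
by_bloom = {}
for question, correct in results:
    right, total = by_bloom.get(question.bloom_level, (0, 0))
    by_bloom[question.bloom_level] = (right + int(correct), total + 1)
print({level: f"{right}/{total}" for level, (right, total) in by_bloom.items()})
```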

At the end of the assessment, a complete breakdown of how the student did and of where their current learner profile stands would allow education to be tailored personally for optimal learning.  In addition to detailed feedback describing the student's learning profile, the tool would provide direct links to instructional materials addressing their areas of weakness.  Content weaknesses would be addressed by linking to video instruction, textbook pages, worked examples, and similar resources, and the material would be presented in a way that supports the student's additional learning needs, such as a greater emphasis on critical thinking or on situations that require recalling terms rather than recognizing them.


To look at this in greater detail, I am going to pick a specific topic as an example: what I am currently doing at my work, mathematics for secondary students.  My plan is to start developing an assessment tool for these students and then expand it to a larger scale.

Pre-assessment instruction will simply be to teach learners to truly master their own personal learning.  Whatever personalized path to knowledge they are on would be carried out in detail, with personalized instruction for each student.  No teaching to the test is necessary, because the test measures true knowledge and understanding on many different levels; the test is instead designed for the student.  A teacher can control every detail of the assessment, from the length, level of detail, and type of question to the measurement goal and the desired path or final outcome for a class or learner (which allows progress towards that outcome to be measured).
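As a sketch of the kind of teacher control described above, the settings for a single assessment might be captured in a configuration like the following; every field name and value here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical teacher-facing settings for one assessment: length, detail,
# question types, the objective being measured, and the outcome or path to track progress against.
@dataclass
class AssessmentConfig:
    length_minutes: int
    detail_level: str                          # e.g. "overview" or "in_depth"
    question_types: List[str] = field(default_factory=list)
    measurement_goal: str = ""                 # the objective or skill being measured
    target_outcome: str = ""                   # the desired path or final outcome

year7_algebra_check = AssessmentConfig(
    length_minutes=45,
    detail_level="in_depth",
    question_types=["multiple_choice", "short_answer", "simulation"],
    measurement_goal="algebraic_fractions",
    target_outcome="GCSE foundation readiness",
)
```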

The assessment instructions would be straightforward, because learners would learn how to interact with the system.  They first create a profile and then measure their baseline knowledge; the longer they measure, the more accurate the profile becomes.  At the start of each topic, a learner would answer questions designed to establish a basic picture of where they are, compare that to their end goal, and determine the path they should take to reach it.  For this example, students come into the classroom, where each has a computer to use.  They are taken through creating an account and taken to the "start" point.  Once there, they are told simply to answer the questions they are given. They then spend a full class period answering questions on the computer as it evaluates their baseline knowledge.  The "start point" would initially be based on the level assigned by their teacher, which would be a preestablished norm for their age and grade as well as subject specific; for this example, Year 7 mathematics.
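A small sketch of how that "start point" could be seeded from a preestablished norm for the learner's year group and subject, before the baseline assessment refines it; the norm values and profile fields are placeholders.

```python
# Placeholder norms: a starting difficulty level per (year group, subject),
# used only until the baseline assessment refines the learner's profile.
start_point_norms = {
    (7, "mathematics"): 3,
    (8, "mathematics"): 4,
}

def create_profile(name: str, year: int, subject: str) -> dict:
    """Create a new learner profile whose start level comes from the teacher-assigned norm."""
    return {
        "name": name,
        "subject": subject,
        "current_level": start_point_norms.get((year, subject), 1),
        "responses": [],   # baseline answers accumulate here and refine the profile over time
    }

profile = create_profile("Sample Student", 7, "mathematics")
print(profile["current_level"])   # starts at the Year 7 mathematics norm
```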

All classroom instruction and learning would be directly tailored to each individual student's level and path.  For this to happen, the teacher would need pre-designed materials for any situation that could arise. This is fairly simple, albeit time consuming: look at the learning objectives of every potential learning path and prepare a cloud database of content, easily accessible at varying levels.  Content could then be accessed quickly, and the educator would act as a subject matter expert and a guide, able to work with students individually rather than delivering content to all students simultaneously.
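A minimal sketch of such a content database, assuming material is indexed by learning objective and level; the objectives, levels, and resource references are placeholders rather than real materials.

```python
# Hypothetical content database indexed by (learning objective, level),
# so the teacher or the CMS can pull the right material for any point on a learning path.
content_db = {
    ("algebraic_fractions", "foundation"): [
        {"type": "video", "ref": "intro-to-algebraic-fractions"},
        {"type": "worked_example", "ref": "simplifying-basic-fractions"},
    ],
    ("algebraic_fractions", "higher"): [
        {"type": "textbook", "ref": "chapter-on-algebraic-fractions"},
        {"type": "simulation", "ref": "fraction-manipulation-activity"},
    ],
}

def materials_for(objective: str, level: str):
    """Return all stored materials for a given objective at a given level."""
    return content_db.get((objective, level), [])

print(materials_for("algebraic_fractions", "foundation"))
```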

Teachers would then give instruction for future assessments in one of many ways, depending on the type of assessment being conducted and whether it was being used as a learning experience.   This could range from giving a predicted time length to describing the level of detail of the assessment.  If the CMS were being used as a learning module, students would instead answer practice questions and, when unsure of an answer, use the in-system video, text, and assistance to learn the material and scaffold their knowledge.

Some standard instructions also need to be given: students should be informed that the assessment, and its pacing, will be different for every student.  Outside resources would not be necessary, as any needed materials would be directly accessible within the CMS; for example, if a question required a calculator, one would be generated, and if notes or collaboration were to be used, the CMS would be able to show previously taken notes (similar to Google Documents) or provide a group interaction platform. Instructions for what to do on completing an assessment must also be given.

Post-assessment feedback will be provided both automatically by the tool and by the educator.  The automatic feedback would be a breakdown of how the student did on the assessment in terms of their current learner profile, where that profile stands in relation to the desired outcome, areas of strength and weakness, and the projected best path forward. Additionally, for each question, content area, and learner skill (critical thinking ability, learning shortcomings, etc.), the system would provide materials for study and review.  For my current use I will rely on Khan Academy, supplemented with MyMaths and Edexcel digital documents, as these are the closest available to the desired CMS and the most closely aligned to our curriculum (GCSE).  For example, if a student shows weakness with algebraic fractions, they would be given links to textbook material, worked examples, and video explanations of how to solve them.   The educator would then discuss what these results mean for the student, talk through any issues they may have had, and plan a path forward. One key component of the feedback would be the student creating a planner that includes goal setting, self-evaluation, a progress journal, and teacher-student communication logs. This would be done through the Google suite, via a Google Doc built from already created templates that the student, the student's parents, and the teacher can all access and use to communicate.  This is a great technology to use because it allows collaboration and real-time updating, increasing the efficacy of communication and work.  This feedback setup is based on the principles of Hattie and Timperley (2007) and Nicol and Macfarlane-Dick (2006): feedback is essential and must be direct, specific, informative, and action oriented.  A specifically designed and easily used feedback system optimizes learning.
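To illustrate how identified weaknesses could be turned into review links automatically, here is a simple sketch; the resource identifiers are placeholders, not real Khan Academy, MyMaths, or Edexcel links.

```python
# Illustrative sketch: turning identified content weaknesses into a review plan.
# Every resource reference below is a placeholder, not a real link.
review_resources = {
    "algebraic_fractions": [
        ("video", "Khan Academy: simplifying algebraic fractions (placeholder)"),
        ("worked_example", "MyMaths: algebraic fractions worked example (placeholder)"),
        ("textbook", "Edexcel GCSE textbook, algebraic fractions section (placeholder)"),
    ],
}

def feedback_for(weak_areas):
    """Build a simple study plan: each weak area maps to its stored review materials."""
    return {area: review_resources.get(area, []) for area in weak_areas}

plan = feedback_for(["algebraic_fractions"])
for area, resources in plan.items():
    print(f"Review plan for {area}:")
    for kind, ref in resources:
        print(f"  - {kind}: {ref}")
```

The same plan could be copied into the student's Google Doc planner, so the goal setting and progress journal stay tied to the specific materials the system recommended.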

Currently, the closest digital tool to this platform is Khan Academy's new "Mission" and "Mastery Goal" feature.  It allows students to answer questions selected on the basis of their previously measured knowledge and to track their growth toward mastery of a predefined set of skills and knowledge.  It has shortcomings in its analysis of critical thinking skills, and the AI technology is not at the level I have described and desire, but at a basic level it works well at adaptive questioning and at providing external resources for learning when students are uncertain in an area.  It also allows students to continue working outside of school and gives them a high level of control, while still giving me excellent feedback and data about each student and the class as a whole. Additionally, I am creating Google documents, as explained above, to serve as personalized trackers and communication logs for me and my students.  Lastly, I have started building a cloud database with all of the necessary resources so that this can eventually become a student centered, student led course.

References


Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

Mayrath, M. C., Clarke-Midura, J., Robinson, D. H., & Schraw, G. (Eds.). (2012). Technology-based assessments for 21st century skills: Theoretical and practical implications from modern research. Charlotte, NC: Information Age Publishing.

Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.
