5.21.2010

The Birth of an Assessment

Sam recently blogged looking for feedback on an Algebra 2 assessment he gave, but mainly to start a conversation about assessment creation.  My classroom assessments have changed drastically over my short two years of experience.

Here's a chronology of my growth:

1st year teaching (Algebra 1 - full year course)
  • Classroom instruction followed the sequence (and, to some extent, the pacing) of the textbook we were using (McDougal Littell Algebra 1).
  • Assessments were largely based on the Chapter Tests from the end of whatever chapter we were in, re-typed and reformatted but using the same problems, with little thought given to balancing what was actually being tested.
  • Points were assigned to each problem to award partial credit for being on the right track (this often ended up meaning the more difficult problems were worth more points than the basic problems - more steps = more points)
  • Here's an example of one such test: Chapter 9 Test.  Point distribution is as follows (side note: what was I thinking with these point distributions...they make no sense!)
    • #1 - #4 :  38 points total (one for coefficient and one for variable in each term of each problem)
    • #5 - #6:  6 points each (coefficient part, variable part for each term)
    • #7 - #8:  4 points each (2 for multiplying correctly, 2 for combining like terms correctly)
    • #9 - #10:  10 points total (coefficient part, variable part for each term)
    • #11 - #12:  9 points total (1 pt each - GCF number, x, y*for #12*, each term left inside parentheses)
    • #13 - #16:  4 points each
    • #17 - #20:  6 points each (4 for factoring, 2 for solving) *#19 was eliminated as unfactorable...oops*
    • Total out of 100 points (this was a rare occurrence...my tests rarely end up being "nice" numbers of points)
Looking back, getting ready for year two, I realized that the tests I gave in year one were nothing more than a random assortment of problems that may or may not have given me an idea of what level of understanding my students had.  Mostly, it was a hoop they (and I) had to jump through at the end of a chapter that didn't tell either one of us anything relevant.  If a student failed a test, I would work with them 1:1 to try to remediate some areas, but it was haphazard at best.  If the whole class failed, then I would plan a retest.

2nd year of teaching (I'll stick to year-long Algebra 1 course for discussion purposes, despite having other classes as well)

- Fall Semester
  • Classroom instruction divided into more logical "units" instead of staying strict to the chapters in the text.
    • Unit 1: Number Sense and Properties
    • Unit 2: Variables, Exponents and Substitution
    • Unit 3: Solving Equations and Inequalities
    • Unit 4: Proportional Reasoning
    • Unit 5:  Graphing
    • Unit 6: Writing equations
    • Unit 7: Systems
    • Unit 8: Polynomials
    • Unit 9: Factoring
    • Unit 10: Quadratics
    • Unit 11: Statistics
  • Assessments were created with more thought given to the types of questions and what they were really asking and showing, in order to give balance
  • Points were assigned for each problem for partial credit opportunities, but with consistency and balance in mind. (i.e., being careful not to weigh a really difficult problem too heavily or allow one specific skill to dominate the point distribution)
  • Here's an example of a unit test:  Unit 2 Exam. Point distribution is as follows:
    • Simplify: 20 points total
      • first row - 1 pt each (basic exponent rules)
      • other problems in this section - 3 pts each
    • Evaluate: 2 pts each (1 for substituting, 1 for simplifying)
    • Let a = 3...: 3 pts each (2 for substituting, 1 for simplifying)
    • Order of operations: 3 pts each
    • Total out of 62 points

- Spring Semester

At the end of 1st semester, my colleague and I started talking much more about assessment.  A lot had changed compared to my first year, but we still weren't satisfied with the information we were getting from students' test grades, nor were the students doing anything to study or improve their skills.  We sat down during our "Snowpocalypse Week" and hashed out a standards based grading system.

Based on the needs of our students, we decided that assessments now would have two forms:
  • Individual skills assessments
    • focus is on progress towards mastery of one learning target
    • points are assigned with a 1 - 5 rating for each skill.
      • 5 = A, mastery level: true mastery of high school level content
      • 4 = B, mastery of the basic level content
      • 3 = C, general understanding of basic level content, some mistakes
      • 2 = D, several mistakes or holes in understanding
      • 1 = F, many mistakes or solutions off point
    • Here's an example of a skills assessment used this term. (basic level, mastery level) The page has 4 skills, but each receives an individual grade
  • "Retention Tests"
    • focus is on whether or not a student is able to recognize solution strategies when more than one problem type is being addressed
    • is the student retaining the information from previously targeted skills?
    • points are assigned in a similar manner to the fall semester unit tests
  • Instruction is still in "units" and retention tests happen at the end of units. 
  • Skills assessments happen on a more ongoing basis: teach, practice, skills assessment, reteach if necessary, re-assess as needed, etc.
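The 1-5 rating scale above is essentially a lookup from rating to letter grade. Here's a minimal sketch in Python (the names and structure are mine, just to illustrate the mapping, not part of our actual grade book):

```python
# Map the 1-5 skill rating described above to its letter grade.
# The dict and function names are illustrative, not from the original system.
RATING_TO_LETTER = {
    5: "A",  # true mastery of high school level content
    4: "B",  # mastery of the basic level content
    3: "C",  # general understanding of basic content, some mistakes
    2: "D",  # several mistakes or holes in understanding
    1: "F",  # many mistakes or solutions off point
}

def letter_grade(rating: int) -> str:
    """Return the letter grade for a 1-5 skill rating."""
    return RATING_TO_LETTER[rating]
```

Because each skill gets its own 1-5 score, a student's grade sheet is just a list of ratings, one per learning target, rather than a single percentage.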
In general, I like having at least one question on a retention test and/or on the mastery level of a skills assessment that isn't exactly like any other problem they have seen.  These types of questions help students synthesize what they have learned and help me see if/how they can apply the strategies in new contexts instead of just regurgitating a procedure.

This year has certainly been one of growth! Some fruit of our labor in the field of standards based grading? Nine of twelve students in Algebra 1 passed the state End of Course Standards of Learning assessment on the first try.  The other three qualified for an expedited retake.  Two of them passed.  This means we have a 90% pass rate for first-time testers in Algebra 1 this year (9 out of 10...2 students that passed this year were re-taking the class)! Last year it was a mere 37.5% (3 out of 8).  While I am aware that passing the state test is not a true picture of whether or not the students can *do* Algebra, I also know that they were more prepared for the test due to targeted remediation of their weak skills as shown on the skills assessments.  That's an achievement.

2 comments:

  1. I am hoping to implement a standards based grading system next year that seems very similar to the one you have described. I have a few questions about the examples of skills assessments you provided. One is labeled "basic" and the other "mastery". How is it determined what level skills assessment the student will get at any one time? Do they all start with the basic level and keep taking them until they pass the mastery one?
    Thank you so much for sharing! I'm in this one alone at my school so I truly appreciate the advice and input.

    ReplyDelete
  2. Christina, thanks for reading and commenting!

    To answer your questions: all students start with the basic level. A perfect score on the basic level is a 4/5 (B). I tend to give a second "basic level" test before giving the mastery test, to ensure that the student has the basic level down solidly. After scoring a 4 on the basic level, they can attempt the mastery level if they want. The way it's been working this year is I keep track of which tests they have taken and tell them when they can take the mastery. In an ideal world, the students would have more control over that process/decision making, too. Not all students get to the mastery tests, because it requires success on the basic test first. Students can retake the basic tests whenever they want (this becomes difficult if you don't have more than one or two forms), and the grades only go up in the grade book.
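    The "grades only go up" rule is just a running maximum per skill. A minimal sketch, assuming a simple dict-based grade book (the names here are mine, purely for illustration):

    ```python
    # Record a retake score, keeping the higher of the old and new ratings
    # ("grades only go up"). gradebook maps skill name -> best 1-5 rating so far.
    def record_score(gradebook: dict, skill: str, new_rating: int) -> int:
        best = max(gradebook.get(skill, 0), new_rating)
        gradebook[skill] = best
        return best
    ```

    So a student who scores a 3, then a 2 on a retake, keeps the 3; a later 4 replaces it.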

    Feel free to contact me if you have more questions that pop up as you start!

    ReplyDelete