What’s an SGP? An introduction for Utah educators

One of the best recent developments in public education is the ability to measure student growth consistently and on a large scale. In Utah, every student who takes the SAGE summative assessments receives a score for content and skill mastery. Beyond that, a student growth percentile (SGP) is also calculated for each student: a number from 1 to 99 that ranks the student’s progress against academic peers across the state who had similar score histories. These growth scores, combined with proficiency scores, give us the most useful indication yet of how students are progressing year to year across the state.
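
To make that concrete, here is a minimal sketch in Python with invented numbers. Real SGPs are estimated with quantile regression over each student’s full score history, so treat this as the underlying intuition rather than the actual statistical model: a student’s current score is percentile-ranked against academic peers who had similar prior scores.

```python
from bisect import bisect_left

def growth_percentile(current_score, peer_scores):
    """Percentile rank (1-99) of a student's current score among academic
    peers, i.e. students whose prior-year scores were similar."""
    peers = sorted(peer_scores)
    below = bisect_left(peers, current_score)   # peers scoring below this student
    pct = round(100 * below / len(peers))
    return min(max(pct, 1), 99)                 # SGPs are reported on a 1-99 scale

# Invented example: a student scores 412 this year; the list holds this
# year's scores for peers who had similar scores last year.
peers = [388, 395, 401, 404, 410, 415, 420, 433, 440, 452]
print(growth_percentile(412, peers))  # -> 50, i.e. typical growth for this group
```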

What are the advantages of using SGPs?

  • The focus is not on categorizing or labeling, but on tracking growth
  • It is a meaningful measure for all students, from least to most advanced
  • It is straightforward and comparable across all classrooms

What are the disadvantages?

  • Not a diagnostic measure, i.e., it gives no indication of the “why” behind a student’s growth
  • There’s still a ceiling for the highest-performing students

Questions for thinking through your students’ SGPs

  • How do growth metrics compare to what you already know about students?
  • Which students are not proficient, but are showing growth?
  • Which students are proficient, but not showing growth?
  • How does growth compare across subject areas for each student? Is growth in Math, ELA, and Science similar or different?
  • Are any student groups (by demographic or classroom) showing patterns or trends?

This overview is brief, and there is much more potential use for SGPs in the classroom. How good is SGP as a metric of school quality? That I can’t say. Making causal claims to categorize classrooms, teachers, or schools is not the point of an SGP. The best use for this metric is to monitor our own instruction as teachers and our own leadership as administrators, and to see where we can take steps forward. Data like this is always meant to start conversation, not bypass it.

Legislative Letter: Support continued investment in SAGE and SGP

Utah policymakers and stakeholders should support the SAGE (Student Assessment of Growth and Excellence) and the resulting SGP (student growth percentile) for the following reasons:

  • Student Growth Percentile is the best metric we have in Utah public education. An SGP, and the resulting MGP (median growth percentile), are simple and informative: they help teachers monitor the effectiveness of their own instruction, help districts monitor the effectiveness of schools, teachers, and programs, and allow comparisons across the state. (See the sketch after this list, and Student Growth Percentiles 101 from RAND: http://www.rand.org/education/projects/measuring-teacher-effectiveness/student-growth-percentiles.html)
  • We have avoided the pitfalls of PARCC and value-added models. Instead of a nationalized test such as PARCC from Pearson, Utah uses an adaptive assessment built to Utah standards. That allows Utah to work closely with American Institutes for Research to continue tailoring the assessment, which is ideal.
  • Effective implementation takes time and patience. The SGP and the adaptive version of SAGE have not been around long enough to be consistently integrated into public education culture and decision-making. Education administrators need time to integrate this data into our information systems, educate teachers and stakeholders, and develop effective analysis and reporting protocols.
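
As a concrete illustration (a minimal sketch with invented numbers, not USOE’s actual reporting code): an MGP is simply the median of the SGPs for a group of students, such as a classroom or a school.

```python
from statistics import median

# Invented SGPs for one classroom; each value is a 1-99 growth percentile.
classroom_sgps = [12, 35, 41, 48, 52, 57, 63, 70, 74, 88]

mgp = median(classroom_sgps)
print(mgp)  # -> 54.5; an MGP near 50 means roughly typical growth overall
```
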
Ideally, we should use SAGE and student growth metrics through an entire 3rd-12th grade cohort (from 2013-2014 to 2021-2022). Consistent data over a long time is necessary for optimizing our public education system, and such longitudinal data would be a first.
There are additional ways to support SAGE and SGP beyond the issue of funding.
  • Avoid inconsistent or contradictory policies. I am especially concerned with any legislative attempts to exempt certain public schools or classrooms from evaluation models. We should monitor student growth across the board.
  • Be cautious in attaching incentives to this assessment and metric. Educators have internal incentive to help students succeed, and rewarding or punishing based on metrics may undermine that. SGP is an informative metric, but not a decisive one.
  • Find out the assessment scores and growth metrics for the schools in your area (USOE’s Data Gateway can help). Compare the percent proficient on SAGE to the Median Growth Percentile. It’s especially helpful to view schools in four quadrants (a sketch for sorting schools this way follows this list):
    • High growth, high proficiency. We’d all like to be here.
    • High growth, low proficiency. The students started and ended with lower proficiency scores, but showed above-average improvement. In these schools, low proficiency scores may be masking high growth.
    • Low growth, high proficiency. The students started and ended with high proficiency scores, but showed below-average growth. In these schools, high proficiency scores may be masking low growth.
    • Low growth, low proficiency. No one wants to be in this quadrant, but this result should spark thoughtful conversation and decision-making about how to improve.
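
Here is a minimal sketch of sorting schools into these quadrants. The school names and numbers are invented, and the cut points (50% proficient, MGP of 50) are an assumption; choose cut points that fit your local context.

```python
# Invented schools and numbers; cut points assumed at 50% proficient
# and an MGP of 50 (the "typical growth" mark).
schools = {
    "School A": {"pct_proficient": 62, "mgp": 58},
    "School B": {"pct_proficient": 31, "mgp": 61},
    "School C": {"pct_proficient": 68, "mgp": 39},
    "School D": {"pct_proficient": 29, "mgp": 42},
}

for name, s in schools.items():
    growth = "high growth" if s["mgp"] >= 50 else "low growth"
    proficiency = "high proficiency" if s["pct_proficient"] >= 50 else "low proficiency"
    print(f"{name}: {growth}, {proficiency}")
```
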
We have a successful assessment and metric: the SAGE test and the SGP are meeting our initial expectations. Let’s stick with them. Utah public education has made strides in increasing graduation rates, decreasing achievement gaps, and improving early literacy. Given time, we have the methods and the data to do more.

Looking at a new assessment? Ask these questions first.

Questions to ask before adopting a computer-based test system
Assessments, like all things, are moving online. While this saves paper and creates new possibilities for data and reporting, you are trading one set of problems for another. Before saying yes, make sure your vendor can work with you to answer the following questions.
Strategic
  • How well does this test fit with our district strategy?
  • How well will this help us realize our strategy?
  • How well will this fit with our existing IT infrastructure?
Accounts management
  • How will users be added and deleted?
  • Is it possible to extract user data from the SIS to set up accounts automatically? (A sketch of this kind of automation follows this list.)
  • What is the unique identifier for students within the system?
  • What is the unique identifier for teachers within the system?
  • How will the district manage user accounts?
  • How will teachers manage student accounts? Will they be able to print, access, and reset student passwords?
  • How hard is it to switch students to a different teacher the day of the test?
  • Can teacher/administrative users be associated with multiple schools within the assessment system?
  • What is the process for managing accommodations? Who can enter and manage students’ test accommodations? How many users per school will be able to do this?
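
On the automation question above, here is a minimal sketch of what an answer might look like: a script that converts a hypothetical SIS export into a vendor’s account-import file. Every file name and column name here is invented; a real SIS export and a real vendor format will differ.

```python
import csv

def build_account_import(sis_export="students.csv", out="accounts.csv"):
    """Convert a hypothetical SIS export into a vendor account-import file.
    All column names are invented for illustration."""
    fields = ["student_id", "last_name", "first_name", "school", "grade"]
    with open(sis_export, newline="") as src, open(out, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=fields)
        writer.writeheader()
        for row in reader:
            writer.writerow({
                "student_id": row["SSID"],      # state student ID as the unique key
                "last_name": row["LastName"],
                "first_name": row["FirstName"],
                "school": row["SchoolNumber"],
                "grade": row["GradeLevel"],
            })
```
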
IT Infrastructure
  • Is a secure browser available?
  • If no secure browser is available, which browser versions are supported?
  • When and how are regular system updates scheduled?
  • How are users notified of system downtime?
  • How do students begin a test session? Will they be able to switch computers mid-test?
  • If there is a recording component, how will teachers be trained to troubleshoot microphones?
  • Will schools need to purchase more computers to accommodate this test?
Interface
  • Does the teacher interface display the same way in all major browsers?
  • How will students be accommodated? Is there a built-in screen reader? Adjustable contrast and font size? Volume control?
  • If there is an audio component, will students be able to adjust volume during the test?
  • If there is a recording component, is there a scrubber so students can replay and re-record?
  • What tools will students be able to use within the interface? Highlighting? Flagging? Note-taking? Reference sheets? Calculators?
Data
  • What is the objective for this data?
  • Can we get large-scale data extracts or dumps? In what formats?
  • How will teachers access test results? District administrators? Students? Parents?
  • How soon will the test results be ready?
  • Once the test results are ready, what is the plan or strategy for reporting on them or making use of them?
  • What user roles are available and what access to data does each role have?
Implementation and Training
  • How will employees be trained on this system? How many training opportunities before a live session?
  • What opportunities will students have to interact with this system before a live testing session?
  • How accessible is the Test Administration Manual to teachers?
  • Are user tutorials available within the system?
Cost
  • What is the cost per student? Per administration window? (A rough roll-up sketch follows this list.)
  • What is the cost of related equipment: headphones, microphones, etc.?
  • What is the cost of training and professional development?
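
To see how these line items combine, here is a rough roll-up with invented figures; substitute your vendor’s actual quote and your district’s enrollment.

```python
# All figures invented; replace with your vendor's actual quote.
students = 5000
per_student_fee = 8.50    # vendor fee per student, per administration window
admin_windows = 2         # e.g. fall and spring
equipment = 12000.00      # headphones, microphones, spares
training = 7500.00        # trainer time and substitute coverage

total = students * per_student_fee * admin_windows + equipment + training
print(f"Total: ${total:,.2f} (${total / students:,.2f} per student)")
# -> Total: $104,500.00 ($20.90 per student)
```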