Friday, November 16, 2012

Taking Stock of Unbridled Learning Results

The Unbridled Learning accountability results have been out for a few days now, and we are seeing lots of articles, board presentations, parent workshops and discussion about them.

Early reports seem to focus on the overall drop in proficiency (which was predicted) and the state's new emphasis on providing a percentile rank for schools and districts. However, there has not been much discussion about the significant increase in the percentage of graduates who are college- and career-ready. This is somewhat disappointing, since college and career readiness is the underlying principle of the accountability model and was the key requirement of 2009's Senate Bill 1.

Other key issues we are hearing about involve the usefulness of the tools provided. While there are massive amounts of data in the new School Report Card, schools are reacting very positively to having the data in one place and to the report card's user-friendly design. It gives a quick, easy snapshot of school and district performance and also provides a multilevel, detailed view of the components that make up each overall score.

The percentile rank system has been well received by most, since it provides an easy way to understand how your school's or district's performance compares to other Kentucky schools. This percentile system is similar to what parents see on individual testing reports. Parents may not understand the raw score from a state or national test; however, they do understand, and want to know, how their child's performance compares to other children across the state and nation.
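For readers who want the mechanics, a percentile rank is simple to compute: it is essentially the percentage of schools scoring below yours. Here is a minimal sketch in Python; KDE's exact formula (for example, how it handles ties or rounding) may differ, so treat this as illustration only, with made-up scores.

    # Percentile rank as "percent of schools scoring below this one."
    # KDE's actual formula may handle ties or rounding differently.
    def percentile_rank(score, all_scores):
        below = sum(1 for s in all_scores if s < score)
        return round(100 * below / len(all_scores))

    # Example with hypothetical overall scores for five schools:
    scores = [48.0, 55.3, 62.1, 70.4, 81.9]
    print(percentile_rank(62.1, scores))  # 40 -- outscored 40% of schools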

The release of the accountability model has also given the Kentucky Department of Education (KDE) an opportunity to receive constructive feedback on concerns with the model. Among these concerns are:
  • complexity of the system
  • science and social studies scores -- too high, compared to math and reading
  • comparisons with national assessments
  • understanding student growth
  • understanding student gap group results
  • perceived lack of consequences for low-performing schools

KDE will share these concerns and others as we present the Unbridled Learning accountability results to the Kentucky Board of Education (KBE) at the December board meeting. Most of these concerns can be addressed by clarification of the model and how the results are reported.

There will be those who call for immediate action to address concerns. I want to close with some of the state and national issues that will certainly impact any immediate or long-range changes to the model.

The Kentucky Board of Education has certainly stated a clear intent to improve the accountability model as we get feedback from the field. The first issue we must consider is that schools and districts entered the 2012-13 school year knowing the "rules of the game" for accountability, and we should not change the rules in the middle of the game. Therefore, I would recommend to KBE that no major changes be made to the regulations governing the model until we have at least two years of data from it. Also, we are governed by the federal Elementary and Secondary Education Act (ESEA) waiver, and any changes to our accountability model would require federal review. Finally, all states are hoping for reauthorization of ESEA (No Child Left Behind), which most certainly will impact the Unbridled Learning model.

As we close out November, parents across Kentucky now know whether their child is on target to be college- and career-ready. From 3rd grade through 12th grade, every student and parent can see the child's trajectory toward reaching college/career readiness by graduation. This gives students, parents and educators what they need to take action so that more of our students reach college/career readiness and have a positive impact on Kentucky's economy.

4 comments:

  1. Why are we using percentiles to measure growth when, according to the NWEA, they are not a good method for measuring growth in students? http://www.nwea.org/support/article/1205

    Also, when examining growth points, what is the purpose of peer groups when it would be just as easy to compare student A's score on test #1 to student A's score on test #2?

    Peer groups don't take into account cultural, social, economic and other differences in students; they are based on just a score on a test. They also don't account for various factors that make schools different, e.g., spending per student, 1:1 programs, and ESL and FRL populations.

    These are just some thoughts as I have examined the growth data.

    Thank you for your time.

  2. Thanks for your excellent questions.

    Some people wonder why Kentucky and 25 other states don't just use two test scores to determine a student's growth. The complicated answer deals with the idea of vertically scaling a summative test across years. Vertical scaling purports to put tests from year to year on a common scale so that a change in score clearly shows improvement. In essence, if student A scores a 125 one year and a 135 the next year, that student went up 10 points.

    The problem with this idea is that vertical scaling is extremely difficult, if not impossible, to do on a once-a-year summative test that has limits on testing time and items. There also are theoretical testing issues. For example, at 3rd grade, we might be measuring addition (2+2). How does this problem get harder in 4th grade or 5th grade or 6th grade? It is very difficult to carry a concept up through higher grade-level requirements, so we end up repeating the 2+2 item in 4th, 5th, 6th and higher grades.

    Of course, a student by 4th or 5th grade would have 2+2 down, but would that really show intellectual growth? We could just keep adding items in lower grades to the upper grade tests, but the length of the test increases significantly. And in many cases, concepts just aren't taught again from the lower grades, so teaching would have to change, too.

    Formative tests, like MAP, allow for multiple testing across the year and draw on literally hundreds of items, so items and time are not limiting factors. With numerous items in the bank, MAP is able to show growth across the year. The problem lies with the once-a-year summative test used by Kentucky.

    Psychometricians (test design experts) warn against trying to build a vertical scale on a once-a-year summative test. In fact, Kentucky's technical advisory panel (NTAPPA) advised against creating a vertical scale. As Kentucky explored growth models, the Student Growth Percentile (SGP) model used in about 25 states was appealing because it works with a once-a-year summative test. It provides a different way to look at growth by comparing students to their academic peer groups in the state. SGP isn't necessarily right or wrong; it's just a different way of creating a growth model.
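
    For those curious about the mechanics, here is a rough sketch of the peer-group idea in Python, with made-up scores. Real SGP implementations estimate the percentile with quantile regression over one or more prior years of scores; this exact-score peer matching is a simplification for illustration only.

        # Simplified Student Growth Percentile (SGP) sketch.
        # A student's "peers" are students with the same prior-year score;
        # the SGP is the percentile rank of the student's current score
        # within that peer group. Actual SGP models use quantile regression
        # over prior years; this exact-match version is illustrative only.
        from bisect import bisect_left

        def student_growth_percentile(prior, current, cohort):
            # cohort: list of (prior_score, current_score) pairs statewide
            peers = sorted(c for p, c in cohort if p == prior)
            if not peers:
                return None  # no peers with this prior-year score
            below = bisect_left(peers, current)  # peers scoring below `current`
            return round(100 * below / len(peers))

        # Example: student A scored 125 last year and 135 this year.
        cohort = [(125, 128), (125, 131), (125, 135), (125, 140), (126, 133)]
        print(student_growth_percentile(125, 135, cohort))  # 50

    Notice that no cross-year scale is needed: scores from the two years are never subtracted, only compared within the same year's peer group, which is how SGP sidesteps the vertical scaling problem.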

  3. Mr. Commissioner,

    With respect, while waiting a year to make most changes to Unbridled Learning may be acceptable, problems concerning serious minority achievement gaps cannot wait another year.

    The "Three Sigma" standard deviation test for truly low performing schools isn't working for African-American students (and I never suggested it as suitable for such use).

    Schools have received the very top Unbridled Learning classification of "Distinguished, School of Distinction," and face no sanctions despite enormous white versus black math achievement gaps.

    That's just not right, and it will undermine the credibility of Unbridled Learning if not aggressively addressed.

    I'll have details next week at the Bluegrass Policy Blog.
