View From the Inside: The Promise Neighborhoods Peer Review Process

Is it better to be lucky or good? If you were a Promise Neighborhoods applicant, it may have helped to be both.

From October 12 to 18, we interviewed 10 of the 102 peer reviewers who scored Promise Neighborhoods applications. While we did not ask them about, nor did they volunteer, the identity or details of any of the applications they reviewed, we did ask them to describe the peer review process in general and to offer advice to the U.S. Department of Education and to Promise Neighborhoods applicants.

Overall, they described a process marked by high levels of professionalism and integrity, including a strong commitment to avoiding conflicts of interest. However, like any process, this one had flaws. Some of those flaws may have affected the outcome of the competition and should be addressed in the future.


The Peer Review Process

Overall, the peer review process for Promise Neighborhoods worked much like a jury. Each application was judged by a panel of three expert peer reviewers, each of whom read the application and scored it independently before meeting with the others to discuss its strengths and weaknesses.

The process began in early May, when the Department posted a public request for peer reviewers. According to the Department, it received approximately 1,000 resumes and chose 102 reviewers after assessing their expertise and screening them for conflicts of interest. The Department sought reviewers with backgrounds “in educational reform and policy; community and youth development; and organizational development and strategy.” Much of the initial recruitment and screening was performed by a contractor, Synergy Enterprises, Inc.

After the peer reviewers were selected, the Department sent application-related materials in late July. The package included hard copies of 9 or 10 applications. For training purposes, the peer reviewers were asked to sit through one of three 90-minute webinars that described the process, including instructions on how to use the online system for inputting scores and comments, called G5.

Each of the 102 peer reviewers was assigned to one of 34 three-person panels. The Department appears to have balanced the panels by ensuring that they included at least one person with a strong background in education and one with a strong background in community and youth development. Every member of a given panel was given the same 9 or 10 applications to review and score on his or her own using the six selection criteria outlined in the Promise Neighborhoods application package. The reviewers entered their initial comments and scores in the G5 online system.

Shortly after scoring the applications independently, the peer reviewers began a series of conference calls to review their scores and comments with the other members of their panels. These calls were typically several hours long and were moderated by a Department employee.

During the calls, the moderator focused attention on areas where the panelists varied significantly in their scoring. The moderators attempted to ensure that the overall scores stayed within 20 points of each other. They also apparently attempted to keep individual scores for each of the six criteria within 3 points of each other. The moderators did not suggest that individual scores should be higher or lower. Instead, they asked peer reviewers to explain their scores and discuss them with each other. If a peer reviewer wished to keep a given score, he or she was not pressured to change it.
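
To make the rule the reviewers described more concrete, here is a minimal Python sketch. The thresholds (20 points overall, 3 points per criterion) come from the interviews; the function name, the abbreviated criterion labels, and the sample scores are our own hypothetical illustration, not the Department’s actual process or tooling.

    # Illustrative only: checks whether a panel's scores fall outside the spreads
    # the reviewers described (20 points overall, 3 points per criterion).
    def flag_divergence(panel_scores, overall_tol=20, criterion_tol=3):
        """panel_scores: one dict per reviewer mapping criterion name -> points awarded."""
        flags = []
        totals = [sum(scores.values()) for scores in panel_scores]
        if max(totals) - min(totals) > overall_tol:
            flags.append(f"overall totals are {max(totals) - min(totals)} points apart")
        for criterion in panel_scores[0]:
            values = [scores[criterion] for scores in panel_scores]
            if max(values) - min(values) > criterion_tol:
                flags.append(f"'{criterion}' scores are {max(values) - min(values)} points apart")
        return flags

    # Hypothetical panel: the third reviewer is much tougher on the management plan.
    reviewers = [
        {"need": 20, "management plan": 18},
        {"need": 19, "management plan": 17},
        {"need": 18, "management plan": 9},
    ]
    print(flag_divergence(reviewers))  # ["'management plan' scores are 9 points apart"]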

Several peer reviewers thought the discussion was a critical part of the review process, producing insights that individual reviewers had not thought of on their own. As the process continued and the peer reviewers got a better sense of the range of applications, they often went back and revised their earlier scores for applications they had already rated.

The process ended around August 16. The peer reviewers were asked to destroy the materials they had received. They were paid $100 per application they reviewed.

After the peer review process was over, each application was given a final score that was the average of its three peer review scores. The applications were ranked by this score, and Departmental staff reviewed the top-ranked applications to verify their eligibility.
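
As a rough illustration of this final step, the short Python sketch below averages each application’s three reviewer scores and ranks the applications by that average. The applicant names and scores are invented for the example.

    from statistics import mean

    # Hypothetical overall scores from each application's three reviewers.
    panel_scores = {
        "Applicant A": [98, 95, 99],
        "Applicant B": [82, 85, 80],
        "Applicant C": [100, 100, 100],
    }

    # Final score is the average of the three reviews; rank from highest to lowest.
    ranked = sorted(panel_scores.items(), key=lambda item: mean(item[1]), reverse=True)
    for applicant, scores in ranked:
        print(f"{applicant}: {mean(scores):.1f}")
    # Applicant C: 100.0
    # Applicant A: 97.3
    # Applicant B: 82.3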


Advice and Comments for the U.S. Department of Education

Overall, the 10 peer reviewers praised the Department for its professionalism and integrity. “Everyone I dealt with was professional and in it for the right reasons,” said one reviewer. “They were very accommodating where they needed to be.”

Nevertheless, there were several areas where the peer reviewers thought there should be improvement in the future:

Some Peer Review Panels May Have Been Tougher than Others: The Department’s peer review process devoted considerable attention to minimizing variation within the panels, asking individual panelists to explain their scores. The peer reviewers we interviewed welcomed this as extremely productive. However, there appears to have been no comparable effort to minimize variation between the panels.

Of the 10 peer reviewers we interviewed, some appeared to be harder scorers than others, and several agreed with that assessment. Some believed that peer reviewers with more experience were more likely to see flaws in applications and were thus tougher in their scoring.

We asked each of the peer reviewers to give the general range in scores for their panels. For several, the highest was in the upper 90s and those applications were chosen as grantees (we did not ask which ones, nor was this information shared). Other panels, however, scored no applications higher than the 80s, thus producing no winners. Several expressed surprise that any applicant scored a perfect 100, as four ultimately did. None of these four was rated by any of the 10 peer reviewers we interviewed.

Some peer reviewers contrasted the seeming precision of the point system with its underlying subjectivity. For example, the “quality of management plan” selection criterion was worth 20 points, with four bullets allocated 5 points each. But what constituted a 3, a 4, or a 5? The Department gave the peer reviewers no guidance. They could only make comparisons across the 9 or 10 applications they received, and they had no way of knowing how other panels were scoring.

“It would have been helpful to have some guiding principles,” said one peer reviewer. “For every section there was a lot of wiggle room. What were we supposed to be looking for? There was lots of room for value judgment.”

Inevitably, applicants unlucky enough to draw peer reviewers who were more conservative in their scoring faced an uphill fight. One reviewer called that part of the process “a crapshoot.”

Peer review processes are inherently subjective, so some of this must be expected. Nevertheless, some peer reviewers suggested that this between-panel variation should be addressed in the future by providing more guidance about what constitutes a perfect or near-perfect score. Another suggestion was to add a second round of judging for high scorers so they could be properly compared against each other, not just against lower scoring applications.

One reviewer went further and advised the Department to include an interview step when judging applications for implementation funds, similar to the process used for Race to the Top.

The Department Should Have Informed the Peer Reviewers About Page-Length Changes: During the grant application period, the Department changed the allowed page lengths for the MOUs and narratives. The peer reviewers were not informed of this change, and only one of the 10 we interviewed knew about it, having learned of it by checking the Department’s web site.

Despite this, most of the peer reviewers we interviewed said that the length of the MOU and narrative did not seem to correlate well with scores. In fact, longer applications often seemed less well written and could be hard to follow. However, when asked if the well-written, shorter applications might have benefited from additional pages, some said that was possible.

The G5 System Was Hard to Use: Several peer reviewers complained about the G5 system, saying it was hard to log in at times and that it often kicked them off the system without saving their work. Many resorted to writing their comments separately in Microsoft Word and then cutting and pasting them into the G5 system when they were able to log in. Several said G5 was the most challenging part of the process.

There Was Too Little Time for Peer Reviews: While the G5 system came in for significant criticism, a close second was the short turnaround time for reviews. Several reviewers complained they were given no more than a few days between receipt of their materials and the first conference call with other members of their panel. Each application required several hours of review time, which was difficult for peer reviewers with demanding or inflexible schedules. One suggested that the time crunch was a potential source of error and that panel members were constantly catching each other’s errors.

The Family and Community Aspects of Promise Neighborhoods Need Improvement: Several peer reviewers with backgrounds in community and youth development felt that this was the weakest aspect of the Department of Education grant and that grantees should not be faulted for the wide variation in response to those portions of the application. Several suggested that this stood in sharp contrast with the strong educational components. They suggested that other federal agencies, particularly the Department of Health and Human Services, need to play a more active role in further developing Promise Neighborhoods.

There Was Too Little Transparency: This last piece of advice is from us. The Department of Education deserves credit for making the names of the peer reviewers public. However, it did not include additional identifying information, such as the organizations where the reviewers worked, which made them difficult to identify. We were able to identify only about 30 of the 102 peer reviewers with any reliability.

The Department should be commended for its steps toward increased transparency, but it should do more. In one sense, the Department got lucky. During our interviews with Promise Neighborhoods applicants, there was widespread suspicion that politics would play a role in the selection process. The results of the process, however, seemed self-evidently apolitical. Chicago, for example, received no grants, despite an assumption that it would, given that the president and Secretary Duncan are from that city. The results may not be so obviously apolitical next time. If so, an increase in transparency will be critical.


Advice to Applicants

Our 10 peer reviewers had the following advice for Promise Neighborhoods applicants.

Hire a Professional Grant Writer: Several peer reviewers suggested that in a process as competitive as this one, hiring a top-notch grant writer was imperative. (At the same time, a few voiced the opposite fear: that strong grant writing might too easily cover up the flaws of a project that was not as solid as it seemed.)

In addition to good grant writing, one peer reviewer suggested that applicants find someone completely outside the process to review their applications to make sure everything made sense. “Sometimes when you are in something so deep you forget that others don’t understand something as well as you do.”

It’s All About the Points: One consistent piece of advice from several peer reviewers was to pay close attention to how the points are being allocated and answer each question and sub-bullet as clearly and directly as possible. Failing to answer even one sub-bullet in the scoring criteria could easily knock an applicant out of the winner’s circle.

“You have to follow all the directions and include everything they require you to include or you are wasting your time,” said one peer reviewer. “You need to follow every direction. Unless you get a near perfect score, you aren’t going to be funded. I can’t say that enough.”

“Be clear. Just answer the questions. Sometimes it is just that simple. An astounding number of people skip one or two criteria. You have to give them a zero.”

It Helped to Have Already Done Much of the Work: When we interviewed Promise Neighborhoods applicants in July, several thought that the Department was looking for applicants that had already done much of the work and had solid staff and management infrastructure already in place. Our peer reviewers seemed to confirm this view.

“Winners were those that had already been moving in that direction and documented it in their proposal. Doing a whole school reform effort is a lot of work. Some of the applicants had already done whole-school reform efforts and already retained some of those elements,” said one. Applicants should “leverage the work that had already been happening in that school and in that community.”

Focus on Demonstrated Local Collaboration and Accountability: A signed MOU was not enough to show shared commitment to the work; evidence of strong, ongoing relationships was needed, particularly with local schools. Including the results framework and expected results in the MOU was one way to demonstrate that commitment.

“Letters from state representatives and prestigious individuals and organizations didn’t make much difference,” said one peer reviewer. “When you saw letters from neighborhood people, churches, etc. — the people who were doing the work and were totally engaged — that made a difference. Not the letters of prestige.”

One peer reviewer who had visited the Harlem Children’s Zone focused on the need for demonstrated accountability, not just between organizations, but within them, right down to the level of staff. “That’s a radical thing,” she said.

“Everyone gives lip service to that, but it is not really a common practice in the everyday world of implementation of services. With education and teacher unions it is a controversial item. In the application, people used ‘data-driven’ and accountability jargon, but it was not well developed.”

Winners Have a Duty to Give Back: According to one reviewer, because they are getting federal funding, “the winners have a duty to the other applicants that did not get the funding and a duty to stay connected to the others in their own community and nationally. Share that information publicly, lessons learned, successes, trials and tribulations. You have a duty to share that.”
