What Does Success Look Like At P2PU?
In the non-profit world, the measures of success are different to those in the for-profit world. A non-profit that is totally successful should, in theory, be able to work itself out of a job. But in reality, non-profits can’t fix the entire world, so we have to find other ways of knowing if we’re doing well.
On the call last week, we spent a lot of time going over what a process for establishing success might look like, figuring out a few useful indicators of success, and then trying to apply these indicators to things we have done in the past. Once we’re all comfortable with these (and this is where your opinion comes in – tell us what you think) it will be much easier to have a rubric for figuring out what success will look like for future projects.
Who Are We Doing This For? And Why Does it Even Matter?
Knowing what success looks like is important for several stakeholders at P2PU:
* Us. Knowing what is successful is useful because:
- It helps us know when to drop something or say “no” to a project or task
- We need to know how we are doing
- We need to know what we’re doing.
- Sometimes, in the past, when we’ve done something that succeeded, it felt like a surprise. And while this is, in some ways, the nature of innovation and experimentation, it is also important to know when we’ve done something well.
* Our Funders & Partners
- Knowing when we’ve been successful is important for our funders, even though some funders have different requirements for how to articulate goals and how to track and report (e.g. Hewlett – http://pad.p2pu.org/p/hewlett-grant-evaluation)
- Future project and research partners need to know how well we’re doing at our work. For example, the OER Research Hub at the Open University wants to measure the impact of OER – so we’re going to have to start measuring something.
* Current and Future Learners
Finding the Right Metrics
It can be easier to think about success in terms of indicators. High-level indicators are essentially aggregates of measurements that give a quick indication of whether or not the organisation is on target with its goals. They also allow us to map all projects and initiatives to our objectives – and while most projects don’t fit neatly into a single goal or objective, it’s still helpful to know how each project contributes to the overall goals of the organisation. MIT Media Lab uses three indicators when measuring success:
- Uniqueness – have we done something new that helps the field?
- Impact – Have we reached people?
- Magic – Did we create epiphanies and enable serendipity?
These can be mapped to the values and goals of P2PU by posing the question “What Do We Do at P2PU?”
Answer: At P2PU we:
- Create unique learning experiences from which we build and share new knowledge (learning/uniqueness/punk);
- Facilitate peer learning all over the world (impact);
- Enable learners to create epiphanies and experience serendipity (magic)
Oh, Really? Prove it!
Once the indicators have been mapped to the goals, we can derive specific objectives that P2PU may have within each indicator by asking What? and How? questions. For example:
* How does P2PU facilitate peer learning all over the world?
* What does P2PU *do* to facilitate peer learning?
Once the What and the How are established, it is useful to move on to the “prove it” stage, which is where the metrics and measuring come in.
For example, if one of the ways P2PU facilitates peer learning all over the world is through the development of its Lernanta platform, what sorts of things can we measure about Lernanta? # of code releases, # of usability tests, # of contributors to the code base, etc.
Note: All of these measures are proxies for measuring the immeasurable, i.e. facilitating peer learning. And they can be refined over time.
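As a rough sketch of what those proxy counts might look like in practice, the snippet below derives two of the suggested Lernanta measures (# of releases, # of contributors) from a commit-log export. The data and field names here are made up for illustration – the real numbers would come from the Lernanta repository history.

```python
from collections import Counter

# Hypothetical commit-log export; authors and release numbers are made up.
commits = [
    {"author": "alice", "release": "0.6"},
    {"author": "bob",   "release": "0.6"},
    {"author": "alice", "release": "0.7"},
    {"author": "carol", "release": "0.7"},
]

# Two of the proxy measures suggested above.
num_releases = len({c["release"] for c in commits})
contributions = Counter(c["author"] for c in commits)
num_contributors = len(contributions)

print(num_releases)      # → 2
print(num_contributors)  # → 3
```

The point isn’t the code, it’s that each proxy reduces to something countable, which makes it easy to track over time and refine later.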
How to Handle the Data
When you start measuring, begin with the data you have, and see how it can be used to help tell your story.
Once you know what data you have, figure out what extra data you’ll need; from there it’s an iterative process of getting better data that’s more closely aligned with the goals and objectives of the organisation.
Mapping Indicators to Values
The values that drive all activity and decisions at P2PU are Openness, Peer Learning and Community. And while we don’t strictly define these in our public statements, it is useful to think of some areas of definition for them, because it makes it easier to map them to indicators and metrics.
Openness:
- everyone can participate
- content is “open”
- model and tech are “open”
- processes are “open”
- possible metrics (hard to measure):
- reuse of “open” materials
- growth of community contributions
- references to our materials
- projects that spin off outside the platform (e.g. MIT Media Lab) (this is also kind of a measure of innovation)
Community:
- governance involves everyone
- volunteer participation all over the place
- cordial (civil, tolerant, respectful, etc.) group of individuals
- strive for quality as a group, helping each other along the way
- possible metrics: list activity/participation, participation in weekly calls, # of courses being run, # of active platform users, # of newsletter click-throughs
Peer Learning:
- peers teach and learn
- everyone contributes something, everyone walks away with something
- individuals take ownership of their learning and support each other
- possible metrics: # of participants and retention, # of courses created, # of courses being actively run
Questions to ask as you go along:
* Do projects that “feel” like successes check out as successes according to the metrics?
* Do interesting stories arise from the things we do?
How Did We Do? Applying indicators to past projects
Hewlett assessment work
- Unique – when we started, nobody was talking about badges yet, and while people understood that evaluating deeper learning (and deeper-learning-type skills) was important, nobody was doing it. And an assessment framework still doesn’t exist.
- How well/ how much is our framework translated into actual assessment/feedback? – Brainspace
- How many learners have we reached with it? – Direct
- How many badges? What type of badges? What is the quality of the badges (i.e. number of badges that are user-generated vs. P2PU-certified / awarded to others)?
- This feels harder
- “When assessment is feedback, there might be a higher chance for magic?”
- Someone getting a job with a badge would qualify as magic (also impact, but pretty cool impact)
- Highly unique – it was a critique/response to the large-scale MOOCs out there
- Super unique: only used OER, leveraged existing resources and services, cost almost nothing to build … runs perpetually
- Not a ton of magic here–we didn’t facilitate a lot of peer interactions
- There was a bit of magic in the openstudy community
- More than 5000 signed up for the first round
- Not sure where this fits – Magic maybe? – Part of our goal with the groups is to maximize “engagement”, or how frequently they discuss
School of Webcraft
- Lots of people completed to some level
- Mozilla still refer people to the challenges
- Not totally unique – other places offered similar things
- BUT the peer-learning aspect was unique, and the freeness too, and the fun/quirkiness (at the time)
- Mozilla certified badges for this? That’s definitely a win. Also part of uniqueness.
Lernanta (course platform)
- More than 2000 daily visits
- 500,000+ unique visitors since launch
- Completely open, with open content that anyone can create
- For Lernanta we shun magic – magic is that stuff that happens in the Django ORM, and we don’t like too much of that
School of Open
- Not many projects attempt to encompass how “open” applies across the spectrum; most are niche projects focused on a single domain
- In progress
- Everything is meant to run openly at the same time… still in progress
Possible metrics for School of Open
• sign-ups vs. active participants vs. participants retained (# of participants remaining at end of courses)
• # of courses actively run vs courses actively participated in vs challenges existing
• kinds of “open” topics
• # of participants each week who complete the assignments, join calls, etc.
• # of badges earned (and according to badge) once badges are installed (this won’t happen for first iteration)
• qualitative reviews and interviews with course participants, where they reflect on the impact they perceive it has had, initiatives they then influence, etc. – eg. power of open book on CC
• # of volunteers (individuals and organizations)
• other impact measures: press mentions, countries participants are in, social media mentions
• community list activity, community list sign-ups
▪ long-term benchmarks
• incorporation of courses or “open” education into institutions and schools
• outcomes of specific courses, eg. open policy course resulting in open policies implemented at X number of orgs/inst’s/gov’ts
• (vague, but worth adding) more people involved in “open” generally?
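The sign-ups vs. active vs. retained funnel above can be sketched in a few lines; the numbers here are made up for illustration, and the real figures would come from platform data once courses have run.

```python
# Made-up course numbers for illustration; replace with real platform data.
sign_ups = 120
active_participants = 80   # participated at least once during the course
retained = 30              # still participating at the end of the course

# Expressing the funnel as rates makes courses of different sizes comparable.
activation_rate = active_participants / sign_ups
retention_rate = retained / sign_ups

print(f"activation: {activation_rate:.0%}, retention: {retention_rate:.0%}")
# → activation: 67%, retention: 25%
```

Tracking the rates rather than the raw counts is what lets us compare, say, a 50-person challenge with a 5,000-person course run.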
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.