Student supervision and feedback

Why strong labs sometimes submit weak papers (Josh Schimel)

I’m sure almost all of us have had to deal with manuscripts where we knew it would be much easier to take the data and just write the paper ourselves, rather than try to coax a student’s work into a polished form. But doing that would undermine them; they need to learn how to write good papers, how to manage the process, and how to gauge when a paper is ready to submit.

In Writing Science, I pointed out that “doing science is inherently an act of both confidence and humility” and that getting the balance between them “is one of the greatest challenges all developing scientists face.” Learning that balance involves both over-shoots and under-shoots. For a student to become a fully fledged professional and peer (as they should), they need to establish ability and confidence, and to develop an independent identity. They need room to grow and to become a peer.

I’ve sometimes found myself in the situation described above, where the student work I’ve been reviewing has had so many issues that it would be easier for me to simply rewrite it than to try to guide the student towards deeper understanding. I have to fight the urge to correct what I see as problems, and instead give feedback that aims to help the student see the problems themselves. Unless they learn how to recognise the issues and make appropriate changes, we’re not helping them to develop as academics and colleagues.

As a supervisor, one of the ways that I’ve found to avoid this has been to switch off the “Track changes” feature. While it gives me a sense of satisfaction to see how much I’ve “helped”, I know that the student will probably just “Accept all changes”. This means that my attempt to show them a “better” way has missed the point because they won’t be paying attention to it anyway. I was alerted to this fact when the due date for a Master’s proposal was rapidly approaching and the student was beginning to stress about their impending submission. They were concerned when I suggested that they begin final preparations, and responded that it wasn’t ready yet, because I, the supervisor, had not yet corrected all of the grammar and spelling. That’s when I knew that I was the problem.

Now when I give feedback I try to ensure that my comments are in the form of questions that highlight what I think are gaps in the student’s thinking or writing processes. I try to give suggestions for actions that the student can take in order to address these gaps, and sometimes offer links to resources that they can use. The point is that it is the responsibility of the student to take action, based on the feedback, in order to improve the work.

If basic research were conducted under the conditions of educational research

Council of the rats, from Wikimedia Commons.

This post was originally published at /usr/space.

Basic, bench research study – you are testing the mechanism of an airborne viral infection on lung function:

  • You have a line of carefully bred rats, all genetically identical.
  • You keep them under controlled conditions of temperature, food, exposure to the environment and isolation from other rats.
  • You expose them to the virus under conditions to ensure they get identical levels of exposure to the pathogen – viral concentrations and durations of exposure.
  • If desired, you expose them to the virus multiple times at specified intervals.
  • After an appropriate interval, you sacrifice the rats to examine the lung tissue for evidence of the effect of the virus.

If you take the same kind of study and try to implement it under the conditions of most educational research, you have something like the following:

  • Your rats come from everywhere: white rats, sewer rats, pet rats, roof rats, Norwegian rats, and even a few mice. In fact, the rats are INTENTIONALLY selected to be diverse, rather than uniform.
  • You have no control over where the rats live, what they eat, what they do, what other rats they consort with, or what activities they pursue.
  • You expose them to the airborne virus in a large room when all the rats are gathered together by releasing the aerosol at the front of the room and letting it diffuse through the rest of the room. During this exposure interval, some rats come in late, some leave early, some are sleeping and thereby breathe in less of the virus, while others are active and breathe in more. Of course, some of the rats aren’t even there.
  • If you want to have multiple exposures, some of the rats from the first exposure will now be absent, whereas other rats will be there for the first time.
  • After exposure, many rats intentionally try to share the virus with their fellow rats.
  • At the same time, dozens of other researchers are using the same rats for their own studies, exposing them to various agents, running them through various mazes, observing their behaviours and feeding them all manner of diets.
  • Instead of holding them in controlled conditions while the virus establishes itself, you have to release them back into the wild, where they roam freely, engaging in all sorts of unexpected activities and exposing themselves to all sorts of other viruses.
  • When it comes time to perform the autopsies to examine the effects of the virus, you first have to catch as many of the rats as possible. Some evade capture, and others that you trap don’t look familiar to you, so you question whether they are really part of the study.
  • Then, you find that the ethics board denies you the opportunity to sacrifice the rats. Instead, you must develop tests to infer the effects of the virus or questionnaires to ask the rats “how they feel”.

Larry D. Gruppen (PhD), University of Michigan Medical School

Writing a research proposal for T&L

Here are my notes from a presentation by Prof. Denise Wood on developing a research proposal for projects looking at T&L.

Image from Tony Duckle's Flickr photostream.

Understanding the funding body is important when it comes to applying for funding. Discipline-specific proposals may not be successful when it comes to T&L projects.

Local evidence of successful projects is important before applying for larger grants. Collaborative teamwork is a great way to build ideas and test concepts. Local resources help you get started and build a track record. Generating pilot data helps to begin publishing. When panels review research proposals, your previous experience in obtaining funding and successful proposals is highly emphasised.

Why are you undertaking the study? Knowing your goals will justify your design decisions. What are your goals:

  • Personal
  • Practical
  • Intellectual / theoretical

Writing proposals is closely tied to career trajectory

How are you using research and research projects to improve your teaching practice?

What conceptual framework are you using:

  • Research paradigm
  • Experiential knowledge
  • Existing theory and research
  • Pilot and exploratory studies

A conceptual framework involves addressing a gap while bringing in personal reflections that guide and influence the research. If you only think of your conceptual framework as a literature review, then you limit the scope of your research to what others have done.

Research questions:

  • What is the relationship between the goals and the conceptual framework?
  • Help to guide the actual research design / methods
  • Used to connect the problem and practical concerns
  • Should be specific and focused on the study
  • Need to allow flexibility to reveal unanticipated phenomena (if the questions are too focused you may miss emergent ideas)
  • Need to avoid inherent assumptions as they bias the study

Find a balance in the number of questions (3-4 is usually adequate)

Begin with divergent thinking to allow yourself space to explore many possibilities. Mind mapping is useful to identify high-level ideas. Begin reading broadly and then begin narrowing the focus. You can’t answer all possible questions in one study.

Try to avoid getting too caught up in the details of the research methods. Only use methods that you understand.

Note that you will be informed by your own epistemological understanding of what knowledge is and how we come to know. Your methods (quantitative, qualitative or mixed methods) will most likely mirror that understanding, which will in turn guide how you sample, gather and analyse data.

For local studies, it’s OK to use a pilot within a classroom. Use this to identify a single context. Larger proposals would be better to expand the scope of the study and test the outcome of the pilot. On the basis of the smaller studies, you can make an argument for the larger study. Think laterally about how you can collect data.

  • Validity: How might you be wrong?
  • Bias (what assumptions do you bring with you? Results and interpretation distorted by your own values and preconceptions)
  • Reactivity (quantitative researcher may try to control for the effect of the researcher influence; qualitative researcher looks at how they actually influence the outcomes)

How do you reduce bias and reactivity?

  • Studies should be intensive and long-term (not the same as longitudinal study)
  • Gather rich, thick data (less likely to get from surveys / questionnaires; rather use interviews or focus groups)
  • Respondent validation of outcomes (is what you heard the same as what they meant?)
  • Identifying discrepant cases or evidence (take outliers into account by identifying and reflecting on them, and suggesting reasons for the discrepancy, rather than necessarily including them in the main data)
  • Triangulation
  • Comparative data (look at different contexts and populations)

Proposal checklist:

  • Identify a funding body
  • Objectives of the funding body
  • Use the guidelines that the funding body provides
  • Review previously funded research to see what has been accepted and / or rejected
  • Links with existing research that the body is involved with
  • Evidence of value, need and benefits (institutional, local, national, international)
  • Background / conceptual framework
  • Methodology
  • Evaluation strategies are valued in educational research
  • Engaged dissemination whereby you share your results as you go, using a variety of methods, including publications, conference presentations, social media and workshops
  • Budget: must meet funding body requirements, realistic, value for money, justify costs
  • Milestones: linked to objectives and outcomes
  • Researcher capabilities: ensure you can deliver what you say you can, track record, previous collaboration, strategic, roles and responsibilities, realistic within workload

Try to model your proposal on successful projects. Learn from the mistakes of others. Sit down with a colleague and ask for constructive feedback.

Explicitly make reference to important and contextually relevant policy documents.

Identify how your research is going to create systemic change.

How are you going to evaluate your process and outcomes?

  • Formative: should be ongoing and used to modify project
  • Summative: can be broad and can go beyond the stated outcomes

Design-based research: can use milestones that are linked to formative evaluation. Identify problems early on and adapt quickly.

How are you going to convince the funding body that the people you’re collaborating with are adding value to the project? You must justify the presence of every team member and highlight how they will contribute.

How are you plugging the holes that funding assessors are going to be looking for?

Differentiate between deliverables (the tangible products that will come from the project) and outcomes (the achievement of stated aims and objectives).

This was originally posted at /usr/space.

Simon Barrie presentation on Graduate Attributes

“Curriculum renewal to achieve graduate learning outcomes: The challenge of assessment”
Prof Simon Barrie, Director of T&L, University of Sydney

Last week I had the opportunity to attend a presentation on graduate attributes and curriculum renewal by Prof Simon Barrie. The major point I took away from it was that we need to be thinking about how to change teaching and assessment practices to make sure that we’re graduating the kinds of students we say we want to. Here are the notes I took.

Assessment is often a challenge when it comes to curriculum renewal. The things that are important (e.g. critical thinking) are hard to measure, which is why we often don’t even try.

Curriculum is more powerful than simply looking at T&L, although bringing in T&L is an essential aspect of curriculum development. Is curriculum renewal just “busy bureaucracy”? It may begin with noble aims but it can degenerate into managerial traps. Curriculum renewal and graduate attributes (GA) should be seen as part of a transformative opportunity.

GA are complex “things” and need to be engaged with in complex ways

GA should be focused on checking that higher education is fulfilling its social role. UNESCO World Declaration on Higher Education: “higher education has given ample proof of its viability over the centuries and of its ability to change and induce change and progress in society”.

GA should be a starting point for a conversation about higher education. If they exist simply as a list of outcomes, then they haven’t achieved their purpose.

How is an institution’s mission embodied in the learning experiences of students and teaching experiences of teachers?

What is the “good” of university?

  • Personal benefit – work and living a rich and rewarding life
  • Public benefit – economy and prosperity, social good
  • The mix of intended “goods” can influence our descriptions of the sorts of graduates that universities should be producing and how they should be taught and assessed. But, the process of higher education is a “good” in itself. The act of learning can itself be a social good e.g. when students engage in collaborative projects that benefit the community.

Universities need to teach people how to think and to question the world we live in.

If you only talk to people like you about GA, you develop a very narrow perspective on what they are. Speaking to a more varied group exposes you to multiple sets of perspectives, which makes curriculum renewal much more powerful. We bring our own assumptions to the conversation. Don’t trust your assumptions. Engage with different stakeholders. Don’t have the discussion around outcomes; have it around the purpose and meaning of higher education.

A framework for thinking about GA: it is complex and not “one size fits all”. Not all GA are at the same “level”; there are different types of “understanding”, which call for different types of assessment and teaching methods.

  • Precursor: approach it as a remedial function, “if only we got the right students”
  • Complementary: everybody needs “complementary” skills that are useful but not integral to domain-specific knowledge
  • Translation: applied knowledge in an intentional way, should be able to use knowledge, translating classroom knowledge into real world application, changing the way we think about the discipline
  • Enabling: need to be able to work in conditions of uncertainty, the world is unknowable, how to navigate uncertainty, develop a way of being in the world, about openness, going beyond the discipline to enable new ways of learning (difficult to pin down and difficult to teach, and assess, hard to measure)

The above ways of “understanding” are all radically different, yet many are put on the same level and taught and assessed in the same way. Policies and implementation need to acknowledge that GA are different.

Gibbons: knowledge brought into the world and made real

The way we talk about knowledge can make it more or less powerful. Having a certain stance or attitude towards knowledge will affect how you teach and assess.

What is the link, if any, between the discipline specific lists and institutional / national higher education lists?

The National GAP – Graduate Attribute Project

What are the assessment tasks in a range of disciplines that generate convincing evidence of the achievement of graduate learning outcomes? What are the assurance processes trusted by disciplines in relation to those assessment tasks and judgments? Assessing and assuring graduate learning outcomes (AAGLO project). Here are the summary findings of the project.

Assessment for learning and not assessment of learning.

Coherent development and assessment of programme-level graduate learning outcomes requires an institutional and discipline statement of outcomes. Foundation skills? Translation attributes? Enabling attributes and dispositions? Traditional or contemporary conceptions of knowledge?

Assessment not only drives learning but also drives teaching.

  • Communication skills – Privileged
  • Information literacy – Privileged
  • Research and inquiry – Privileged
  • Ethical social professional understandings – Ignored (present in the lists, but not assessed)
  • Personal intellectual autonomy – Ignored (present in the lists, but not assessed)

Features of effective assessment practices:

  • Assessment for learning
  • Interconnected, multi-component, connected to other assessment, staged, not isolated
  • Authentic (about the real world), relevant (personally to the student), roles of students and assessors
  • Standards-based with effective communication of criteria, assessment for GA can’t be norm-referenced, must be standards-based
  • Involve multiple decision makers – including students
  • Programme level coherence, not just an isolated assessment but exists in relation to the programme

The above only works as evidence to support learning if it is coupled with quality assurance

  • Quality of task
  • Quality of judgment (calibration prior to assessment, and consensus afterwards)
  • Confidence

There is a need for programme-level assessment. Assessment is usually focused at a module level. There’s no need to assess on a module level if your programme level is effective. You can then do things like have assessments that cross modules and are carried through different year levels.

How does a university curriculum, teaching and learning effectively measure the achievement of learning outcomes? In order to achieve certain types of outcomes, we need to give them certain types of learning experiences.

Peter Knight’s “wicked competencies”: you can’t fake wickedness – it’s got to be the real thing: messy, challenging and consequential problems.

The outcomes can’t be used to differentiate programmes, so use teaching and learning methods and experiences to differentiate.

Stop teaching content. Use content as a framework to teach other things e.g. critical thinking, communication, social responsibility

5 lessons:

  1. Set the right (wicked) goals collaboratively
  2. Make a signature pedagogy for complex GA part of the five-year plan
  3. Develop policies and procedures to encourage and reward staff
  4. Identify and provide sources of data that support curriculum renewal, rather than shut down conversations about curriculum
  5. Provide resources and change strategies to support curriculum renewal conversations

Teaching GA is “not someone else’s problem”, it needs to be integrated into discipline-specific teaching.

Be aware that this conversation is very much focused on “university” or “academic” learning, and ignores the many different ways of being and thinking that exist outside the university. How is Higher Education connecting with the outside world? Is there a conversation between us and everyone else?

We try to shape students into a mold of what we imagine they should be, without really acknowledging their unique characteristics or embracing their potential contribution to the learning relationship.

We (academics) are also often removed from where we want our students to be. Think about critical thinking, inquiry-based learning, collaboration, embracing multiple perspectives. Is that how we learn? Our organisational culture drives us away from the GA we say we want our students to have.

Resources

Originally posted at /usr/space.