
Writing The Discussion Section Of A Psychology Research Paper

How to Write a Lab Report

Saul McLeod, published 2011


Conducting a piece of research is a requirement for most psychology degree courses.

Of course, before you write up the report you have to research human behavior and collect some data. Final-year students often find it difficult to choose a suitable research topic for their psychology lab report, and usually attempt to make things more complicated than they need to be.

Ask your supervisor for advice, but if in doubt, keep it simple; choose a memory experiment (you don't get extra marks for originality). Remember to make sure your research in psychology adheres to ethical guidelines. You will also most likely write your paper according to APA style.


Ethical Considerations in Research

If the study involves any of the following, due consideration should be given to (1) whether to conduct the study, and (2) how best to protect the participants’ rights.

Psychological or physical discomfort.

Invasion of privacy. If you are researching on private property, such as a shopping mall, you should seek permission.

Deception about the nature of the study or the participants’ role in it. Unless you are observing public behavior, participants should be volunteers and told what your research is about. If possible obtain informed consent. You should only withhold information if the research cannot be carried out any other way.

Research with children. In a school you will need the head teacher's consent and, if (s)he thinks it is advisable, the written consent of the children's parents/guardians. Testing children in a lab requires the written consent of parents/guardians.

Research with non-human animals. Experimentation with animals should only rarely be attempted. You must be trained to handle and care for the animals and ensure that their needs are met (food, water, good housing, exercise, gentle handling and protection from disturbance). Naturalistic observation poses fewer problems but still needs careful consideration; the animals may be disturbed especially where they are breeding or caring for young.

When conducting investigations, never:

    • Insult, offend or anger participants.

    • Make participants believe they may have harmed or upset someone else.

    • Break the law or encourage others to do so.

    • Contravene the Data Protection Act.

    • Copy tests or materials without permission of the copyright holder.

    • Make up data.

    • Copy other people’s work without crediting it.

    • Claim that somebody else’s wording is your own.

Infringement of any ethical guidelines may result in disqualification of the project.


Lab Report Format

Title page, abstract, references and appendices are started on separate pages (subsections from the main body of the report are not). Use double-line spacing of text, font size 12, and include page numbers.

The report should have a thread of argument linking the prediction in the introduction to the content in the discussion.


1. Title Page:

This must indicate what the study is about. It must include the IV (independent variable) and DV (dependent variable). It should not be written as a question.


2. Abstract: (you write this last)

The abstract comes at the beginning of your report but is written at the end.

The abstract provides a concise and comprehensive summary of a research report. Your style should be brief but not in note form. Look at examples in journal articles. It should aim to explain very briefly (about 150 words) the following:

    • Start with a one/two sentence summary, providing the aim and rationale for the study.

    • Describe participants and setting: who, when, where, how many, what groups?

    • Describe the method: what design, what experimental treatment, what questionnaires, surveys or tests used.

    • Describe the major findings, which may include a mention of the statistics used and the significance levels, or simply one sentence summing up the outcome.

    • The final sentence(s) outline the study's 'contribution to knowledge' within the literature. What does it all mean? Mention implications of your findings if appropriate.


3. Introduction:

The purpose of the introduction is to explain where your hypothesis comes from. You must be explicit regarding how the research outlined links to the aims / hypothesis of your study.

    • Start with general theory, briefly introducing the topic.

    • Narrow down to specific and relevant theory and research. Two or three studies is sufficient.

    • There should be a logical progression of ideas which aids the flow of the report. This means the studies outlined should lead logically into your aims and hypotheses.

    • Do be concise and selective; avoid the temptation to include anything just in case it is relevant (i.e. don't write a shopping list of studies).

    • Don’t turn this introduction into an essay.

    • Don’t spell out all the details of a piece of research unless it is one you are replicating.

    • Do include any relevant critical comment on research, but take care that your aims remain consistent with the literature review. If your hypothesis is unlikely, why are you testing it?

AIMS: The aims should not appear out of thin air; the preceding review of psychological literature should lead logically into them.

    • Write a paragraph explaining what you plan to investigate and why. Use previously cited research to explain your expectations. Later these expectations are formally stated as the hypotheses.

    • Do understand that aims are not the same as the hypotheses.

HYPOTHESES: State the alternate hypothesis and make sure it is clear, concise and includes the variables under investigation.


4. Method

  • Assume the reader has no knowledge of what you did and ensure that he/she would be able to replicate (i.e. copy) your study exactly by what you write in this section.

  • Write in the past tense.

  • Don’t justify or explain in the Method (e.g. why you chose a particular sampling method), just report what you did.

  • Only give enough detail for someone to replicate the experiment - be concise in your writing.

USE THE FOLLOWING SUBHEADINGS:

Design –

State the experimental design, name the independent variable and its different conditions/levels, and name the dependent variable, making sure it is operationalized. Identify any controls used, e.g. counterbalancing, control of extraneous variables.
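To see what full counterbalancing looks like in practice, here is a small sketch that assigns every possible ordering of the conditions to participants in turn (the condition names and participant count are made up for illustration):

```python
# Full counterbalancing: generate every ordering of the IV conditions and
# cycle participants through them, so order effects cancel out across the sample.
from itertools import permutations

conditions = ["massed practice", "spaced practice"]  # hypothetical IV levels
orders = list(permutations(conditions))              # all possible orderings

for participant, order in zip(range(1, 5), orders * 2):
    print(f"P{participant}: {' -> '.join(order)}")
```

With two conditions there are only two orders, so participants are simply alternated between them; with three conditions there would be six orders to rotate through.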

Participants –

Identify the target population (refer to a geographic location) and type of sample. Say how you obtained your sample (e.g. opportunity sample). Give relevant details, e.g. how many, age range.

Materials –

Describe the materials used, e.g. word lists, surveys, computer equipment etc. You do not need to include wholesale replication of materials – instead include a ‘sensible’ (illustrative) level of detail.

Procedure –

Describe the precise procedure you followed when carrying out your research i.e. exactly what you did. Describe in sufficient detail to allow for replication of findings. Be concise in your description and omit extraneous / trivial details. E.g. you don't need to include details regarding instructions, debrief, record sheets etc.


5. Results:

The results section of a paper usually presents the descriptive statistics followed by the inferential statistics. Avoid interpreting the results (save this for the discussion).

Make sure the results are presented clearly and concisely. A table can be used to display descriptive statistics if this makes the data easier to understand. DO NOT include any raw data.

Use APA Style

  • Numbers reported to 2d.p. (incl. 0 before the decimal if < 1.00, e.g. “0.51”). The exceptions to this rule: Numbers which can never exceed 1.0 (e.g. p-values, r-values): report to 3d.p. and do not include 0 before the decimal place, e.g. “.001”.

  • Percentages and degrees of freedom: report as whole numbers.

  • Statistical symbols that are not Greek letters should be italicised (e.g. M, SD, t, X, F, p, d).

  • Include spaces on either side of the equals sign.

  • When reporting 95% CIs (confidence intervals), upper and lower limits are given inside square brackets, e.g. “95% CI [73.37, 102.23]”
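The two rounding rules above can be captured in a couple of lines of Python; this is just an illustrative sketch, and the helper names `apa_num` and `apa_p` are my own, not part of any style guide:

```python
def apa_num(x):
    """Format a statistic to 2 d.p., keeping the leading zero (e.g. 0.51)."""
    return f"{x:.2f}"

def apa_p(p):
    """Format a value that can never exceed 1.0 (e.g. a p-value) to 3 d.p.
    with no zero before the decimal point (e.g. .001)."""
    return f"{p:.3f}".lstrip("0")

print(apa_num(0.514))  # 0.51
print(apa_p(0.0014))   # .001
```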

What information to include:

    • The type of statistical test being used.

    • Means, SDs & 95% confidence intervals (CIs) for each IV level. If you have four to 20 numbers to present, a well-presented table is best, APA style.

    • Clarification of whether no difference or a significant difference was found, and the direction of the difference (only where significant).

    • The mean difference and 95% CIs (confidence intervals).

    • The effect size (this does not appear on the SPSS output).

For example - “A ____ test revealed there was a significant (or no significant) difference in the scores for the IV level 1 (M = ___, SD = ___, 95% CI [____, ____]) and IV level 2 (M = ___, SD = ___, 95% CI [____, ____]) conditions; t(__) = ____, p = ____”
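As a rough sketch of where each number in that template comes from, the following Python uses SciPy on made-up scores (the data, group sizes, and variable names are all illustrative assumptions, not from the guide):

```python
import numpy as np
from scipy import stats

# Hypothetical recall scores for two IV levels
group1 = np.array([14, 12, 15, 11, 13, 16, 12, 14])
group2 = np.array([10, 9, 12, 8, 11, 10, 9, 11])

t, p = stats.ttest_ind(group1, group2)   # independent-samples t-test
df = len(group1) + len(group2) - 2       # degrees of freedom

def ci95(x):
    # 95% confidence interval for the mean, using the t distribution
    h = stats.sem(x) * stats.t.ppf(0.975, len(x) - 1)
    return x.mean() - h, x.mean() + h

lo1, hi1 = ci95(group1)
lo2, hi2 = ci95(group2)

# Cohen's d from the pooled SD (the effect size SPSS output omits)
pooled_sd = np.sqrt(((len(group1) - 1) * group1.std(ddof=1) ** 2 +
                     (len(group2) - 1) * group2.std(ddof=1) ** 2) / df)
d = (group1.mean() - group2.mean()) / pooled_sd

print(f"IV level 1: M = {group1.mean():.2f}, SD = {group1.std(ddof=1):.2f}, "
      f"95% CI [{lo1:.2f}, {hi1:.2f}]")
print(f"t({df}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```

Each printed value slots directly into one of the blanks in the template.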


6. Discussion:

    • Outline your findings in plain English (no statistical jargon) and relate your results to your hypothesis, e.g. is it supported or rejected?

    • Compare your results to the background materials from the introduction section. Are your results similar or different? Discuss why/why not.

    • How confident can we be in the results? Acknowledge limitations, but only if they can explain the result obtained. If the study has found a reliable effect, be very careful about suggesting limitations, as you would be casting doubt on your own results. Unless you can think of a confounding variable that could explain the results instead of the IV, it is advisable to leave this section out.

    • Suggest constructive ways to improve your study if appropriate.

    • What are the implications of your findings? Say what your findings mean for the way people behave in the real world.

    • Suggest an idea for further research triggered by your study - something in the same area, but not simply an improved version of yours. Perhaps you could base this on a limitation of your study.

    • Concluding paragraph – Finish with a statement of your findings and the key points of the discussion (e.g. interpretation and implications), in no more than 3 or 4 sentences.


7. References:

The reference section is the list of all the sources cited in the essay (in alphabetical order). It is not a bibliography (a list of the books you used).

In simple terms every time you refer to a name (and date) of a psychologist you need to reference the original source of the information.

If you have been using textbooks this is easy as the references are usually at the back of the book and you can just copy them down. If you have been using websites then you may have a problem as they might not provide a reference section for you to copy.

References need to be set out APA style:

Books

Author, A. A. (year). Title of work. Location: Publisher.

Journal Articles

Author, A. A., Author, B. B., & Author, C. C. (year). Article title. Journal Title, volume number(issue number), page numbers

A simple way to write your reference section is to use Google Scholar. Just type the name and date of the psychologist in the search box and click on the 'cite' link.

Next, copy and paste the APA reference into the reference section of your essay.

Once again remember that references need to be in alphabetical order according to surname.


How to reference this article:

McLeod, S. A. (2011). Psychology research report. Retrieved from www.simplypsychology.org/research-report.html

Presenting Results

Authors face the significant challenge of presenting their results in the Journal of Pediatric Psychology (JPP) completely yet succinctly, and of writing a convincing discussion section that highlights the importance of their research. The third and final in a series of editorials (Drotar, 2009a, b), this article provides guidance for authors to prepare effective results and discussion sections. Authors also should review the JPP website (http://www.jpepsy.oxfordjournals.org/) and consider other relevant sources (American Psychological Association, 2001; APA Publications and Communications Board Working Group on Journal Reporting Standards, 2008; Bem, 2004; Brown, 2003; Wilkinson & The Task Force on Statistical Inference, 1999).

Follow APA and JPP Standards for Presentation of Data and Statistical Analysis

Authors’ presentations of data and statistical analyses should be consistent with publication manual guidelines (American Psychological Association, 2001). For example, authors should present the sample sizes, means, and standard deviations for all dependent measures and the direction, magnitude, degrees of freedom, and exact p levels for inferential statistics. In addition, JPP editorial policy requires that authors include effect sizes and confidence intervals for major findings (Cumming & Finch, 2005, 2008; Durlak, 2009; Wilkinson & the Task Force on Statistical Inference, 1999; Vacha-Haase & Thompson, 2004).

Authors should follow the Consolidated Standards of Reporting Trials (CONSORT) when reporting the results of randomized clinical trials (RCTs) in JPP (Moher, Schultz, & Altman, 2001; Stinson-McGrath, & Yamoda, 2003). Guidelines have also been developed for nonrandomized designs, referred to as the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement (Des Jarlais, Lyles, Crepaz, & the TREND Group, 2004) (available from http://www.trend-statement.org/asp/statement.asp). Finally, studies of diagnostic accuracy, including sensitivity and specificity of tests, should be reported in accord with the Standards for Reporting of Diagnostic Accuracy (STARD) (Bossuyt et al., 2003) (http://www.annals.org/cgi/content/full/138/1/W1).

Finally, authors may also wish to consult a recent publication (APA Publications and Communications Board Working Group on Journal Reporting Standards, 2008) that contains useful guidelines for various types of manuscripts including reports of new data collection and meta-analyses. Guidance is also available for manuscripts that contain observational longitudinal research (Tooth, Ware, Bain, Purdie, & Dobson, 2005) and qualitative studies involving interviews and focus groups (Tong, Sainsbury, & Craig, 2007).

Provide an Overview and Focus Results on Primary Study Questions and Hypotheses

Readers and reviewers often have difficulty following authors’ presentation of their results, especially for complex data analyses. For this reason, it is helpful for authors to provide an overview of the primary sections of their results and also to take readers through their findings in a step-by-step fashion. This overview should follow directly from the data analysis plan stated in the method (Drotar, 2009b).

Readers appreciate the clarity of results that are consistent with and focused on the major questions and/or specific hypotheses that have been described in the introduction. Readers and reviewers should be able to identify which specific hypotheses were supported, which received partial support, and which were not supported. Nonsignificant findings should not be ignored. Hypothesis-driven analyses should be presented first, prior to secondary analyses and/or more exploratory analyses (Bem, 2004). The rationale for the choice of statistics and for relevant decisions within specific analyses should be described (e.g., rationale for the order of entry of multiple variables in a regression analysis).

Report Data that is Relevant to Statistical Assumptions

Authors should provide appropriate evidence, including quantitative results where necessary, to affirm that their data fit the assumptions required by the statistical analyses that are reported. When assumptions underlying statistical tests are violated, authors may use transformations of data and/or alternative statistical methods in such situations and should describe the rationale for them.
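One common way to produce such quantitative evidence (offered here as an illustration, not as JPP's prescribed procedure, and using made-up data) is a normality test on each group plus a homogeneity-of-variance test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(50, 10, 30)  # illustrative scores for two groups
group_b = rng.normal(55, 10, 30)

w, p_norm = stats.shapiro(group_a)           # Shapiro-Wilk test of normality
lev, p_var = stats.levene(group_a, group_b)  # Levene's test of equal variances

print(f"Shapiro-Wilk: W = {w:.2f}, p = {p_norm:.3f}")
print(f"Levene: F = {lev:.2f}, p = {p_var:.3f}")
```

Non-significant results on both tests would support reporting a standard parametric analysis; significant violations would motivate the transformations or alternative methods mentioned above.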

Integrate the Text of Results with Tables and/or Figures

Tables and figures provide effective, reader-friendly ways to highlight key findings (Wallgren, Wallgren, Perrson, Jorner, & Haaland, 1996). However, authors face the challenge of describing their results in the text in a way that is not highly redundant with information presented in tables and/or figures. Figures are especially useful to report the results of complex statistics such as structural equation modeling and path analyses that describe interrelationships among multiple variables and constructs. Given constraints on published text in JPP, tables and figures should always be used selectively and strategically.

Describe Missing Data

Reviewers are very interested in understanding the nature and impact of missing data. For this reason, information concerning the total number of participants and the flow of participants through each stage of the study (e.g., in prospective studies), the frequency and/or percentages of missing data at different time points, and analytic methods used to address missing data is important to include. A summary of cases that are missing from analyses of primary and secondary outcomes for each group, the nature of missing data (e.g., missing at random or missing not at random), and, if applicable, statistical methods used to replace missing data, and/or understand the impact of missing data (Schafer & Graham, 2002) are useful for readers.
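The per-timepoint frequencies and percentages described above are straightforward to tabulate; the following pandas sketch uses hypothetical data and column names of my own invention:

```python
import numpy as np
import pandas as pd

# Hypothetical outcome scores for six participants at three time points
df = pd.DataFrame({
    "baseline": [4.2, 3.8, 5.1, 4.9, 4.4, 3.9],
    "month_6":  [4.0, np.nan, 5.0, 4.7, np.nan, 3.8],
    "month_12": [np.nan, np.nan, 4.8, 4.6, np.nan, 3.6],
})

n_missing = df.isna().sum()            # count of missing values per time point
pct_missing = df.isna().mean() * 100   # percentage missing per time point

print(pd.DataFrame({"n missing": n_missing,
                    "% missing": pct_missing.round(1)}))
```

A summary like this, reported for each group and time point, gives reviewers the participant-flow information the paragraph calls for.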

Consider Statistical Analyses that Document Clinical Significance of Results

Improving the clinical significance of research findings remains an important but elusive goal for the field of pediatric psychology (Drotar & Lemanek, 2001). Reviewers and readers are very interested in the question: what do the findings mean for clinical care? For this reason, I strongly encourage authors to conduct statistical evaluations of the clinical significance of their results whenever it is applicable and feasible. In order to describe and document clinical significance, authors are strongly encouraged to use one of several recommended approaches including (but not limited to) the Reliable Change Index (Jacobson, Roberts, Burns, & McGlinchey, 1999; Jacobson & Truax, 1991; Ogles, Lambert, & Sawyer, 1995), normative comparisons (Kendall, Marrs-Garcia, Nath, & Sheldrick, 1999); or analyses of the functional impact of change (Kazdin, 1999, 2000). Statistical analyses of the cost effectiveness of interventions can also add to clinical significance (Gold, Russell, Siegel, & Weinstein, 1996). Authors who report data from quality of life measures should consider analyses of responsiveness and clinical significance that are appropriate for such measures (Revicki, Hays, Cella, & Sloan, 2008; Wywrich et al., 2005).
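Of the approaches listed above, the Reliable Change Index (Jacobson & Truax, 1991) is the simplest to compute; the sketch below shows the standard formula, with example numbers (pre/post scores, SD, reliability) that are purely illustrative:

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson & Truax (1991) Reliable Change Index.

    RCI = (post - pre) / SEdiff, where SEdiff = sqrt(2) * SEm and
    SEm = SD * sqrt(1 - reliability). |RCI| > 1.96 suggests change
    beyond what measurement error alone would produce.
    """
    se_m = sd_pre * math.sqrt(1 - reliability)
    se_diff = math.sqrt(2) * se_m
    return (post - pre) / se_diff

# Illustrative case: symptom score drops from 40 to 28 on a measure with
# SD = 7.5 and test-retest reliability = .88
rci = reliable_change_index(pre=40, post=28, sd_pre=7.5, reliability=0.88)
print(f"RCI = {rci:.2f}, reliable change: {abs(rci) > 1.96}")
```

Reporting the proportion of participants whose RCI exceeds the 1.96 criterion is one concrete way to answer the "what do the findings mean for clinical care?" question.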

Include Supplementary Information Concerning Tables, Figures, and Other Relevant Data on the JPP Website

The managing editors of JPP appreciate the increasing challenges that authors face in presenting the results of complicated study designs and data analytic procedures within the constraints of JPP policy for manuscript length. For this reason, our managing editors will work with authors to determine which tables, analyses, and figures are absolutely essential to be included in the printed text version of the article versus those that are less critical but nonetheless of interest and can be posted on the JPP website in order to save text space. Specific guidelines for submitting supplementary material are available on the JPP website. We believe that increased use of the website to post supplementary data will not only save text space but will facilitate communication among scientists that is so important to our field and encouraged by the National Institutes of Health.

Writing the Discussion Section

The purpose of the discussion is to give readers specific guidance about what was accomplished in the study, the scientific significance, and what research needs to be done next.

The discussion section is very important to readers but extremely challenging for authors, given the need for a focused synthesis and interpretation of findings and presentation of relevant take-home messages that highlight the significance and implications of their research.

Organize and Focus the Discussion

Authors are encouraged to ensure that their discussion section is consistent with and integrated with all previous sections of their manuscripts. In crafting their discussion, authors may wish to review their introduction to make sure that the points that are most relevant to their study aims, framework, and hypotheses that have been previously articulated are identified and elaborated.

A discussion section is typically organized around several key components presented in a logical sequence including synthesis and interpretation of findings, description of study limitations, and implications, including recommendations for future research and clinical care. Moreover, in order to maximize the impact of the discussion, it is helpful to discuss the most important or significant findings first followed by secondary findings.

One of the most common mistakes that authors make is to discuss each and every finding (Bem, 2004). This strategy can result in an uninteresting and unwieldy presentation. A highly focused, lively presentation that calls the reader's attention to the most salient and interesting findings is most effective (Bem, 2004). A related problematic strategy is to repeat findings in the discussion that have already been presented without interpreting or synthesizing them. This adds length to the manuscript, reduces reader interest, and detracts from the significance of the research. Finally, it is also problematic to introduce new findings in the discussion that have not been described in the results.

Describe the Novel Contribution of Findings Relative to Previous Research

Readers and reviewers need to receive specific guidance from authors in order to identify and appreciate the most important new scientific contribution of the theory, methods, and/or findings of their research (Drotar, 2008; Sternberg & Gordeva, 2006). Readers need to understand how authors’ primary and secondary findings fit with what is already known as well as challenge and/or extend scientific knowledge. For example, how do the findings shed light on important theoretical or empirical issues and resolve controversies in the field? How do the findings extend knowledge of methods and theory? What is the most important new scientific contribution of the work (Sternberg & Gordeva, 2006)? What are the most important implications for clinical care and policy?

Discuss Study Limitations and Relevant Implications

Authors can engage their readers most effectively with a balanced presentation that emphasizes the strengths yet also critically evaluates the limitations of their research. Every study has limitations that readers need to consider in interpreting their findings. For this reason, it is advantageous for authors to address the major limitations of their research and their implications rather than leaving it to readers or reviewers to identify them. An open discussion of study limitations is not only critical to scientific integrity (Drotar, 2008) but is an effective strategy for authors: reviewers may assume that if authors do not identify key limitations of their studies they are not aware of them.

Description of study limitations should address specific implications for the validity of the inferences and conclusions that can be drawn from the findings (Campbell & Stanley, 1963). Commonly identified threats to internal validity include issues related to study design, measurement, and statistical power. Most relevant threats to external validity include sample bias and specific characteristics of the sample that limit generalization of findings (Drotar, 2009b).

Although authors’ disclosure of relevant study limitations is important, it should be selective and focus on the most salient limitations, (i.e., those that pose the greatest threats to internal or external validity). If applicable, authors may also wish to present counterarguments that temper the primary threats to validity they discuss. For example, if a study was limited by a small sample but nonetheless demonstrated statistically significant findings with a robust effect size, this should be considered by reviewers.

Study limitations often suggest important new research agendas that can shape the next generation of research. For this reason, it is also very helpful for authors to inform reviewers about the limitations of their research that should be addressed in future studies and specific recommendations to accomplish this.

Describe Implications of Findings for New Research

One of the most important features of a discussion section is the clear articulation of the implications of study findings for research that extends the scientific knowledge base of the field of pediatric psychology. Research findings can have several kinds of implications, such as the development of theory, methods, study designs, and data analytic approaches, or the identification of understudied and important content areas that require new research (Drotar, 2008). Providing a specific agenda for future research based on the current findings is much more helpful than general suggestions. Reviewers also appreciate being informed about how specific research recommendations can advance the field.

Describe Implications of Findings for Clinical Care and/or Policy

I encourage authors to describe the potential clinical implications of their research and/or suggestions to improve the clinical relevance of future research (Drotar & Lemanek, 2001). Research findings may have widely varied clinical implications. For example, studies that develop a new measure or test an intervention have greater potential clinical application than a descriptive study that is not directly focused on a clinical application. Nevertheless, descriptive research such as identification of factors that predict clinically relevant outcomes may have implications for targeting clinical assessment or interventions concerning such outcomes (Drotar, 2006). However, authors should be careful not to overstate the implications of descriptive research.

As is the case with recommendations for future research, the recommendations for clinical care should be as specific as possible. For example, in measure development studies it may be useful to inform readers about the next steps in research that are needed to enhance the clinical application of a measure.

This is the final in the series of editorials that are intended to be helpful to authors and reviewers and improve the quality of the science in the field of pediatric psychology. I encourage your submissions to JPP and welcome our collective opportunity to advance scientific knowledge.

Acknowledgments

The hard work of Meggie Bonner in typing this manuscript and the helpful critique of the associate editors of Journal of Pediatric Psychology and Rick Ittenbach are gratefully acknowledged.

Conflict of interest: None declared.

References

American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: Author.

APA Publications and Communications Board Working Group on Journal Article Reporting Standards. (2008). Reporting standards for research in psychology. Why do we need them? What do they need to be? American Psychologist, 63, 839-851.

Bem, D. J. (2004). Writing the empirical journal article. In The complete academic: A career guide (2nd ed., pp. 105-219). Washington, DC: American Psychological Association.

Bossuyt, P. M., et al. (2003). The STARD statement for reporting studies of diagnostic accuracy: Explanation and elaboration. Annals of Internal Medicine, 138, W1-W12.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

Cumming, G., & Finch, S. (2008). Putting research in context: Understanding confidence intervals from one or more studies. Journal of Pediatric Psychology. Advance Access published December 18, 2008.

Des Jarlais, D. C., Lyles, C., Crepaz, N., & the TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND Statement. American Journal of Public Health, 94, 361-366.

Drotar, D. (2000). Writing research articles for publication. In Handbook of research methods in clinical child and pediatric psychology (pp. 347-374). New York: Kluwer Academic/Plenum Publishers.

Drotar, D. (2006). Psychological interventions in childhood chronic illness. Washington, DC: American Psychological Association.

Drotar, D. (2008). Thoughts on establishing research significance and presenting scientific integrity. Journal of Pediatric Psychology, 33, 1-3.

Drotar, D. (2009a). Editorial: Thoughts on improving the quality of manuscripts submitted to the Journal of Pediatric Psychology: Writing a convincing introduction. Journal of Pediatric Psychology, 34, 1-3.

Drotar, D. (2009b). Editorial: How to report methods in the Journal of Pediatric Psychology. Journal of Pediatric Psychology.

Drotar, D., & Lemanek, K. (2001). Steps toward a clinically relevant science of interventions in pediatric settings. Journal of Pediatric Psychology, 26, 385-394.

Durlak, J. A. (2009). How to select, calculate, and interpret effect sizes. Journal of Pediatric Psychology. Advance Access published February 16, 2009.

Gold, M. R., Russell, L. B., Siegel, J. E., & Weinstein, M. C. (1996). Cost-effectiveness in health and medicine. New York: Oxford University Press.

Jacobson, N. S., Roberts, L. J., Berns, S. B., & McGlinchey, J. B. (1999). Methods for defining and determining clinical significance of treatment effects: Description, application, and alternatives. Journal of Consulting and Clinical Psychology, 67, 300-307.

Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12-19.

Kazdin, A. E. (2000). Psychotherapy for children and adolescents: Directions for research and practice. New York: Oxford University Press.

Kendall, P. C., Marrs-Garcia, A., Nath, S. R., & Sheldrick, R. C. (1999). Normative comparisons for the evaluation of clinical significance. Journal of Consulting and Clinical Psychology, 67, 285-299.

Moher, D., Schulz, K. F., & Altman, D. G. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Journal of the American Medical Association, 285, 1987-1991.

Ogles, B. M., Lambert, M. J., & Sawyer, J. D. (1995). Clinical significance of the National Institute of Mental Health Treatment of Depression Collaborative Research Program data. Journal of Consulting and Clinical Psychology, 63, 321-326.
