NURS 350 DQ Quantitative Research Designs and Approaches


DQ1 Describe quantitative research designs that are used to support changes in nursing practice. Choose one and explain why you chose it. Give an example of how this research design is used to drive change in nursing practices.

DQ2 What is the difference between statistical significance and clinical significance? Explain why statistically significant results in a study do not always mean that the study is clinically significant. Provide an example.
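
For illustration only (a minimal, hypothetical sketch, not part of the assigned reading): with a large enough sample, even a clinically trivial difference can reach statistical significance. The simulation below assumes two groups of 10,000 patients whose mean systolic blood pressure differs by only 1 mmHg, a difference few clinicians would consider meaningful; the group sizes, means, and standard deviations are invented for the example.

# Hypothetical illustration of statistical vs. clinical significance (Python).
# All numbers are invented; they are not taken from any cited study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=140.0, scale=10.0, size=10_000)  # mean 140 mmHg
treated = rng.normal(loc=139.0, scale=10.0, size=10_000)  # mean 139 mmHg

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (treated.var(ddof=1) + control.var(ddof=1)) / 2
)

print(f"p-value: {p_value:.2e}")      # far below 0.05 -> statistically significant
print(f"Cohen's d: {cohens_d:.2f}")   # roughly -0.1 -> a clinically trivial effect

Here the p-value is tiny because the sample is huge, yet the effect size (and the 1 mmHg difference itself) is far too small to change clinical practice.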

The overarching aim of science education research is to improve the quality of science instruction and foster students' learning about science. Teachers want to know how they can guide individual students to learn about science and how they can teach a class of students about science. Principals and teachers, among others, want to know how to support the science faculty at their school to increase the average achievement of their students and to reduce the variance in achievement in science in comparison with other schools. Policymakers want to know how to allocate resources most efficiently in order to increase educational achievement across their district, state, or nation in relation to the development of economic and social affairs. Finally, researchers must be able to use the results of other researchers for their own investigations. However, to obtain findings that teachers, principals, and policymakers can rely upon, the findings need to be trustworthy. Therefore, certain quality measures have to be applied to investigations. Quality measures that are typically considered are objectivity, reliability, and validity. This chapter details approaches to, and designs of, quantitative research in science education, with Rasch analysis as a particular example, in order to obtain high-quality, objective, reliable, and valid findings about science education and the respective framework conditions.
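
For reference (standard notation, not taken from the chapter itself), the dichotomous Rasch model mentioned above expresses the probability that person n answers item i correctly as a function of the person's ability \theta_n and the item's difficulty \delta_i:

P(X_{ni} = 1) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}

The higher the ability relative to the difficulty, the closer this probability is to 1; when ability and difficulty are equal, the probability is 0.5. Rasch analysis estimates these person and item parameters from the response data on a common logit scale.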

 

To perform quantitative studies, one has to start with a research question, including the operational definitions of the variables and the purpose of the study. The results of investigating each level of the educational system, and the research-based conclusions drawn from them, should be of a kind that other researchers can rely on; such conclusions should also serve as a base for extending their research.
Trustworthiness of Evidence
There are many issues that must be addressed as one seeks to ensure the trustworthiness of research results. As researchers, we have to be aware of the multitude of variables and conditions that can influence research quality. For instance, it appears to be easy to compare the average achievement of two classes in one school. But without taking into account issues such as the classes' average prior knowledge, reading ability, socio-economic background, and cognitive abilities, the mismeasurement is possibly too large to draw conclusions with any certainty. This is true for quantitative as well as for the theory-based qualitative studies addressed in this chapter. For a description of grounded theory and design-based research see chapter XX in this handbook. For example, the ability of a participant to express scientific features in a certain language, or the prior knowledge about a specific scientific concept, might influence the participant's responses in a semi-structured interview. Theory-based qualitative researchers should also consider these factors, even in case studies, in order to produce reliable interpretations of the observed dialogs between teachers and students, or between students, in classroom activities. Therefore, in a first step, we have to think very carefully about possible confounding variables and their possible influence upon assessment, motivation, or other categories with regard to the focus of the investigation. All in all, the question of the trustworthiness and generalizability of research results has to be answered to address the different ways in which the aforementioned stakeholders use the results of research to improve teacher education and research. Additionally, knowing about the confounding variables means understanding the limitations of the study and aids the interpretation of the results.

This also shows how important it is to communicate and to teach content in a way that reflects the state of the art in teaching by referring to empirically evident results. Researchers should be able to tell future teachers how they can increase the probability of their own teaching being of high quality. Often enough, teacher education cannot refer to empirically based results and therefore reproduces intuitive beliefs and myths. For example, a great amount of time is spent on teaching future teachers about students' everyday conceptions or misconceptions, although there is little evidence that such teacher knowledge is beneficial for students' learning and thus for improvement in science instruction. In fact, a recent study comparing the teaching and learning of physics in Finland and Germany shows that Finnish teachers know less about misconceptions than do German teachers, but their students learn significantly more than do German students (Olszewski, Neumann, & Fischer, 2009; Olszewski, 2010). Despite these and other findings that knowledge about misconceptions is not as important for high-quality teaching as often assumed (Hammer, 1996), there is a multitude of studies on students' misconceptions, and some universities have advocated that knowledge about misconceptions in all sciences is important for a science teacher (e.g., New York Science Teacher, 2013). This raises the question of whether misconceptions are as important as science educators have considered them to be, and whether training on knowing misconceptions might be a waste of learning time in science teacher education.

As authors, we believe that this leads to the natural conclusion that, in science teacher education, teacher educators should teach only, or at least mostly, those content areas that can be trusted from the standpoint of the commonly agreed rules of the research community. Sadly, this perspective has not always been applied. For example, in a meta-analysis of inquiry-based science teaching, Furtak, Seidel, Iverson, and Briggs (2012) started with about 5,800 studies. After excluding papers in a first step using criteria such as studies not written in English, outcome variables not about science achievement, studies not published in peer-reviewed journals, and data not provided in the paper, Furtak et al. found that only 59 papers remained. After criteria of good research practice, such as a pre-post design, two groups, cognitive outcome measures, and effect size calculations, were applied in a further selection, only 15 papers from the 5,800 studies remained. After asking some authors of the excluded articles for data and additional calculations, Furtak et al. found that 22 of the excluded studies could be included in their meta-analysis. In another meta-analysis, Ruiz-Primo, Briggs, Shepard, Iverson, and Huchton (2008) analyzed the impact of innovations in physics education. They started with more than 400 papers and found that only 51 of the papers reporting these studies could be used for a quantitative synthesis of the effects. Thorough meta-analyses such as those detailed above are an excellent example of applied trustworthiness. They make clear that the standards of the research community need to be applied to studies if published results are to be used for further research.
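
As an illustration of the effect size calculations mentioned above (a minimal, hypothetical sketch; the numbers are invented and not taken from the cited meta-analyses), a common choice for a two-group pre-post design is Cohen's d computed on gain scores, i.e., the difference in mean pre-to-post gains divided by the pooled standard deviation of the gains.

# Hypothetical sketch (Python): Cohen's d on gain scores for a two-group, pre-post design.
import numpy as np

def gain_scores(pre, post):
    """Pre-to-post gain for each participant."""
    return np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Invented test scores for a treatment class and a comparison class.
treatment_gain = gain_scores(pre=[52, 48, 60, 55], post=[61, 58, 72, 63])
comparison_gain = gain_scores(pre=[50, 53, 58, 49], post=[55, 57, 63, 52])
print(round(cohens_d(treatment_gain, comparison_gain), 2))  # d > 0 favours the treatment group

Meta-analyses can only synthesize studies that report enough information (group sizes, means, and standard deviations) to compute such an effect size, which is one reason so few of the original papers survived the selection criteria described above.
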
Obtaining Evidence
One of the main problems of research in science education in general is to classify different types of cognition. The direct sensory experience of human beings is generally incomplete and not dependable because of the restricted sensitivity range of our organs of perception. Everyday communication and other social interaction rely on agreed common knowledge and intuition, which is fuzzy by nature, applicable and trustworthy enough for social interaction, but which can even be wrong. In any case, as researchers, we are not able to guarantee the correctness of such communication and interaction. If we are not sure, we can ask experts for their opinion. However, experts' opinions can be mistaken, and their reasoning, even if it is based on certain logical systems and rules, can rest on false premises. To avoid mistakes and to obtain trustworthy conclusions, we have to use scientific methods and procedures that are agreed upon by researchers and experts, and that allow more trustworthy statements than those based upon a few experts' opinions.

Studies must be linked to the whole range of relevant past studies and conducted using scientific methods and quality criteria. Therefore, planning and performing a study must include a theoretical model grounded in relevant past work, rigorous sampling, well-elaborated instrument construction that involves piloting of the instruments, an adequate experimental design, up-to-date psychometrics, carefully captured data, and rigorous interpretation of results. Within the research community, these criteria and this process must again be discussed and agreed upon. Doing so allows the quality of the results of all investigations to be estimated and also provides implications for further research and practice. The necessary agreement within a community of researchers requires publicity and discussion such as that which has taken place with regard to the nature of science (e.g., Lederman, Abd-El-Khalick, Bell, & Schwartz, 2002; Osborne, Collins, Ratcliffe, Millar, & Duschl, 2003; Lederman, 2006, 2007) or the professional knowledge of science teachers (e.g., Magnusson, Krajcik, & Borko, 1999; Park & Oliver, 2008; cf. Gess-Newsome & Lederman, 1999). It is also important in science education that researchers be able to replicate studies. In the natural sciences, replication and public discussion are an indispensable part of evaluating the trustworthiness of scientific investigations. We suggest that national and international associations for science education should allow and support the publishing and reporting of replications of research in their journals and at conferences.
Obviously, empirical research can never be a method of proving anything (Popper, 1959). The results have to be discussed and interpreted, but conclusions are tentative and open to revision. Therefore, it is necessary that all published work include a detailed description of the project data, with a clear explanation of the process of constructing the respective measures and of analyzing them. Raw data must be made available upon request.