Saturday, June 20, 2009

Defining the problem II

A problem can be based on practical (vital) or intellectual matters. It refers to the recognition of a specific difficulty or obstacle that motivates an interest in learning more about it.
Some problem characteristics
1. A matter to solve or clarify, in which the degree of difficulty and the solution are uncertain.
2. A set of facts or circumstances that hinders the achievement of a specific objective.
3. A proposition aimed at finding out how to obtain a result when certain data are known.
The problem must not only be selected but also defined and formulated, so that other people can understand it. If the problem is well defined and formulated, the researcher works with clarity and a full understanding of it. If, on the other hand, the problem is poorly defined and formulated, it will be confusing and unstructured, and its development will not be accurate.
Once you have identified a problem, discuss it: explain in your own words what you intend to investigate and how you intend to do it. If people understand it, then you have a problem to work on.
Advice when selecting and identifying a problem
1. Choose a topic in which you are specialized.
2. The topic should be personal to you.
3. The topic must remain motivating from beginning to end.
4. Investigate something about which you have knowledge, experience, and enough existing sources.
5. Focus your problem on one specific, concrete aspect; it should be expressed as a single question to be answered.
Most problems show up in our daily work (our professional practice).
Three aspects to keep in mind before selecting a problem
1. Vocation
2. Intellectual qualification
3. Availability of enough sources
Some characteristics a research topic should have
1. It should be innovative, that is, presented as original and intended to offer ideas, hypotheses, and guidance for future studies.
2. Its results should be useful for designing strategies that lead to solutions.
3. It should be innovative and socially relevant.
Some suggestions for selecting a topic for investigation
1. Interest
2. No duplication
3. Avoid prejudices
4. Feasibility (it should be possible to carry it out)
The problem can be stated as a question or described precisely in a paragraph.
Summary
The problem
a. A real problem is one that seeks knowledge that is still unknown.
b. The question must be the core of the investigation.
c. Two processes are involved: identifying and formulating.
d. Formulating and delimiting the problem determines the amount of work to be done.
Problem characteristics
a. Precise
b. Limited in scope: not too broad
c. Original
d. Viable: feasible to solve
Types of problems
a. Academic: intellectual and professional
b. Information: scientific
c. Action: when the relevance of the investigation is examined, and then its success.
d. Vital: solving a problem in society
e. Pure or applied: for the progress of science


Wednesday, June 10, 2009

SECOND QUESTIONNAIRE / RESEARCH METHODS

Sorry for the tardiness, but I've had some issues with my blog...


Second questionnaire / research methods
1. What’s social research?
2. What disciplines are involved in social research?
3. What are the two well-known categories into which social methods are subdivided?
4. What are quantitative methods focused on?
5. What are qualitative methods focused on?
6. What do qualitative and quantitative methods share in common?
7. Which tools do quantitative researchers resort to?
8. Which tools do qualitative researchers resort to?
9. What was the main problem with ordinary human inquiry?
10. What does Charles C. Ragin state in his book?
11. What does social research attempt to do?
12. What’s the difference between fact and theory?
13. What’s an axiom or postulate?
14. What are propositions?
15. Where are hypotheses derived from?
16. What are variables?
17. What do theories describe?
18. Define independent variable
19. What’s an idiographic explanation?
20. What’s a nomothetic explanation?
21. Read the information about the qualitative and quantitative debate. Then, write down your perception of it.
22. What are the paradigms social researchers usually follow?
23. What are the primary assumptions of the ethics in social research?
24. What are the social research techniques?
25. Read and cluster the information about quantitative and qualitative research.

Monday, June 1, 2009

RESEARCH METHODS

Universidad Latina de Costa Rica
Guápiles Branch
Research Methods
Prof. Robertho Mesén Hidalgo, Lic.
First Questionnaire

1. Read the definitions given to the term research and come up with your own definition.
2. How is scientific research defined?
3. How does the author define fundamental or pure research?
4. Why is basic research described as exploratory?
5. How was basic research considered?
6. Why is research a structural process?
7. What are the steps followed by the scientific method?
8. What’s the main issue about the hypothesis in the scientific method?
9. What does a hypothesis allow?
10. What does the historical method comprise?
11. What are some common concepts to be considered in the historical research?
12. In research methods, what’s the main objective to consider?
13. Based on the excerpts on publishing and research funding: how does the lack of them affect your professional development? Does it limit you? Why?
14. What’s the etymological origin for the word research? What does it mean exactly?
15. What’s applied research? Is it necessary for your professional development? Justify
16. Does the educational system have an applied research department? Testing
17. What does a hypothesis consist of?
18. In the scientific method, what is required of the scientific hypothesis?
19. What do specialists base hypotheses on?
20. During research, what may happen to a hypothesis?
21. What’s the difference between scientific hypothesis and scientific theory?
22. What must be done when the hypothesis is proved?
23. Does the hypothesis always follow a mathematical model?
24. What must the hypotheses enable?
25. According to the hypothesis theory, what does the scientific method involve?
26. If the investigator already knows the outcome of a test, what does the hypothesis become?
27. Can a hypothesis be confirmed? Why?
28. When researchers weigh up alternative hypotheses, which aspects may they take into consideration?
29. What’s the difference between conceptual definition and operational definition?
30. What does the historical method comprise and refer to?
31. What does higher criticism involve?
32. What’s lower criticism? What approaches does textual criticism include?
33. Define internal criticism
34. Paraphrase the checklist for evaluating eyewitness testimony.
35. Analyze, in your own words, each of the six particular conditions for accepting oral tradition. Further discussion
36. Paraphrase the seven conditions for a successful argument.


Universidad Latina de Costa Rica
Guápiles Branch
Research Methods
Lic. Robertho Mesén Hidalgo
II – 2009

Research
Research is defined as human activity based on intellectual application in the investigation of matter. The primary purpose of applied research is the discovery, interpretation, and development of methods and systems for the advancement of human knowledge on a wide variety of scientific matters of our world and the universe. Research can use the scientific method, but need not do so.
Scientific research relies on the application of the scientific method, a harnessing of curiosity. This research provides scientific information and theories for the explanation of the nature and the properties of the world around us. It makes practical applications possible. Scientific research is funded by public authorities, by charitable organisations and by private groups, including many companies. Scientific research can be subdivided into different classifications according to their academic and application disciplines.
Historical research is embodied in the historical method.
The term research is also used to describe an entire collection of information about a particular subject.
Basic research
Basic research (also called fundamental or pure research) has as its primary objective the advancement of knowledge and the theoretical understanding of the relations among variables (see statistics). It is exploratory and often driven by the researcher’s curiosity, interest, and intuition. Therefore, it is sometimes conducted without any practical end in mind, although it may have confounding variables (unexpected results) pointing to practical applications. The terms “basic” or “fundamental” indicate that, through theory generation, basic research provides the foundation for further, sometimes applied research. As there is no guarantee of short-term practical gain, researchers may find it difficult to obtain funding for basic research.
Examples of questions asked in basic research:
Does string theory provide physics with a grand unification theory?
Which aspects of genomes explain organismal complexity?
Is it possible to prove or disprove Goldbach's conjecture? (i.e., that every even integer greater than 2 can be written as the sum of two primes, not necessarily distinct)
Traditionally, basic research was considered as an activity that preceded applied research, which in turn preceded development into practical applications. Recently, these distinctions have become much less clear-cut, and it is sometimes the case that all stages will intermix. This is particularly the case in fields such as biotechnology and electronics, where fundamental discoveries may be made alongside work intended to develop new products, and in areas where public and private sector partners collaborate in order to develop greater insight into key areas of interest. For this reason, some now prefer the term frontier research.
Research processes
Scientific method
Generally, research is understood to follow a certain structural process. Though step order may vary depending on the subject matter and researcher, the following steps are usually part of most formal research, both basic and applied:
Formation of the topic
Hypothesis
Conceptual definitions
Operational definitions
Gathering of data
Analysis of data
Test, revising of hypothesis
Conclusion, iteration if necessary
A common misunderstanding is that by this method a hypothesis can be proven or tested. Generally a hypothesis is used to make predictions that can be tested by observing the outcome of an experiment. If the outcome is inconsistent with the hypothesis, then the hypothesis is rejected. However, if the outcome is consistent with the hypothesis, the experiment is said to support the hypothesis. This careful language is used because researchers recognize that alternative hypotheses may also be consistent with the observations. In this sense, a hypothesis can never be proven, but rather only supported by surviving rounds of scientific testing and, eventually, becoming widely thought of as true (or better, predictive), but this is not the same as it having been proven. A useful hypothesis allows prediction and within the accuracy of observation of the time, the prediction will be verified. As the accuracy of observation improves with time, the hypothesis may no longer provide an accurate prediction. In this case a new hypothesis will arise to challenge the old, and to the extent that the new hypothesis makes more accurate predictions than the old, the new will supplant it.
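As a toy illustration of this reject-or-support logic (not part of the original text), here is a minimal Python sketch assuming a hypothetical experiment in which a hypothesis predicts that fertilized plants grow taller than unfertilized ones; the data are invented for the example.

    # Minimal sketch of the "test, revise" step, using made-up data.
    from statistics import mean

    # Hypothesis (if-then form): "If plants receive fertilizer (X), then they grow taller (Y)."
    # Prediction derived from it: mean growth of the treated group exceeds that of the control group.
    treated = [12.1, 13.4, 11.8, 12.9]   # hypothetical growth measurements (cm)
    control = [10.2, 11.0, 10.7, 10.4]   # hypothetical growth measurements (cm)

    def evaluate(prediction_holds):
        # Careful wording: a consistent outcome only *supports* the hypothesis,
        # while an inconsistent one leads us to reject or revise it.
        if prediction_holds:
            return "outcome consistent: the experiment supports (but does not prove) the hypothesis"
        return "outcome inconsistent: reject or revise the hypothesis"

    print(evaluate(mean(treated) > mean(control)))

A real study would of course replace the bare comparison of means with a proper statistical test, but the control flow mirrors the reasoning described above.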
Historical method
The historical method comprises the techniques and guidelines by which historians use historical sources and other evidence to research and then to write history. There are various history guidelines commonly used by historians in their work, under the headings of external criticism, internal criticism, and synthesis. This includes higher criticism and textual criticism. Though items may vary depending on the subject matter and researcher, the following concepts are usually part of most formal historical research:
Identification of origin date
Evidence of localization
Recognition of authorship
Analysis of data
Identification of integrity
Attribution of credibility
When we set out to apply the historical method, what shall we do?
Research methods
The goal of the research process is to produce new knowledge, which takes three main forms (although, as previously discussed, the boundaries between them may be fuzzy):
Exploratory research, which structures and identifies new problems
Constructive research, which develops solutions to a problem
Empirical research, which tests the feasibility of a solution using empirical evidence
Research can also fall into two distinct types:
Primary research
Secondary research
Research is often conducted using the hourglass model structure of research. The hourglass model starts with a broad spectrum for research, focusing in on the required information through the methodology of the project (like the neck of the hourglass), then expands the research in the form of discussion and results.
Publishing
Academic publishing describes a system that is necessary in order for academic scholars to peer review the work and make it available for a wider audience. The 'system', which is probably disorganised enough not to merit the title, varies widely by field, and is also always changing, if often slowly. Most academic work is published in journal article or book form. In publishing, STM publishing is an abbreviation for academic publications in science, technology, and medicine.
Most established academic fields have their own journals and other outlets for publication, though many academic journals are somewhat interdisciplinary, and publish work from several distinct fields or subfields. The kinds of publications that are accepted as contributions of knowledge or research vary greatly between fields.
Academic publishing is undergoing major changes, emerging from the transition from the print to the electronic format. Business models are different in the electronic environment. Since about the early 1990s, licensing of electronic resources, particularly journals, has been very common. Presently, a major trend, particularly with respect to scholarly journals, is open access. There are two main forms of open access: open access publishing, in which the articles or the whole journal is freely available from the time of publication, and self-archiving, where the author makes a copy of their own work freely available on the web.
Research funding
Most funding for scientific research comes from two major sources, corporations (through research and development departments) and government (primarily through universities and in some cases through military contractors). Many senior researchers (such as group leaders) spend more than a trivial amount of their time applying for grants for research funds. These grants are necessary not only for researchers to carry out their research, but also as a source of merit.
Some faculty positions require that the holder has received grants from certain institutions, such as the US National Institutes of Health (NIH). Government-sponsored grants (e.g. from the NIH, the National Health Service in Britain or any of the European research councils) generally have a high status.
Etymology
The word research derives from the French recherche, from rechercher, "to search closely," where chercher means "to search"; its literal meaning is "to investigate thoroughly."
Academic conference
Advertising research
Conceptual framework
Creativity techniques
Demonstrative evidence
Due diligence
Dialectical research
Empirical evidence
Empirical research
European Charter for Researchers
Internet research
Innovation
Lab notebook
List of fields of doctoral studies
Marketing research
National Council of University Research Administrators (NCURA)
Open research
Operations research
Original research
Participatory action research
Psychological research methods
Qualitative Doctoral Dissertation Proposal
Research and development
Social research
Applied research
Applied research is research accessing and using some part of the research communities' (the academy's) accumulated theories, knowledge, methods, and techniques, for a specific, often state, commercial, or client driven purpose. Applied research is often contrasted with pure research in debates about research ideals, programs, and projects.
Basic Definition
Every organizational entity engages in applied research. The basic definition for applied research is any fact gathering project that is conducted with an eye to acquiring and applying knowledge that will address a specific problem or meet a specific need within the scope of the entity. Just about any business entity or community organization can benefit from engaging in applied research. Here are a couple of examples of how applied research can help an organization grow.
When most people think of applied research, there is a tendency to link the term to the function of research and development (R and D) efforts. For business entities, R and D usually is involved with developing products that will appeal to a particular market sector and generate revenue for the company. The research portion of the R and D effort will focus on uncovering what needs are not being met within a targeted market and use that information to begin formulating products or services that will be attractive and desirable. This simplistic though systematized approach may also be applied to existing products as well, leading to the development of new and improved versions of currently popular offerings. Thus, applied research can open up new opportunities within an existing client base, as well as allow the cultivation of an entirely new sector of consumers.
Applied research in NPOs
Non-profit organizations also can utilize the principles of applied research. Most of these types of organizations have a specific goal in mind. This may be to attract more people to the organization, or to raise public awareness on a given issue, such as a disease. The concept of applied research in this scenario involves finding out what attracts people to a cause, and then developing strategies that will allow the non-profit entity to increase the public profile of the organization, and entice people to listen to what they have to say and offer.
Working models for applied research
Applied research can be very simplistic within a given application or it can become quite complicated. While the principle of applied research is easily grasped, not every organization contains persons who are competent in the process of engaging in applied research. Fortunately, there are a number of professionals who are able to step in and help any entity create a working model for applied research.
In some cases, this may be the most productive approach, since an outsider often notices information that may be easily overlooked by those who are part of the organization. Whether implemented as an internal effort or outsourced to professionals who routinely engage in applied research, the result is often a higher public profile for the organization, and improved opportunities for meeting the goals of the entity.
Hypothesis
A hypothesis (from Greek ὑπόθεσις [iˈpoθesis]) consists either of a suggested explanation for an observable phenomenon or of a reasoned proposal predicting a possible causal correlation among multiple phenomena. The term derives from the Greek, hypotithenai meaning "to put under" or "to suppose." The scientific method requires that one can test a scientific hypothesis. Scientists generally base such hypotheses on previous observations or on extensions of scientific theories. Even though the words "hypothesis" and "theory" are often used synonymously in common and informal usage, a scientific hypothesis is not the same as a scientific theory. A hypothesis is never to be stated as a question, but always as a statement with an explanation following it. It is not to be a question because it states what the experimenter thinks will occur. Hypotheses are usually written in the "if-then form": If X, then Y.
In early usage, scholars often referred to a clever idea or to a convenient mathematical approach that simplified cumbersome calculations as a hypothesis; when used this way, the word did not necessarily have any specific meaning. Cardinal Bellarmine gave a famous example of the older sense of the word in the warning issued to Galileo in the early 17th century: that he must not treat the motion of the Earth as a reality, but merely as a hypothesis.
In common usage in the 21st century, a hypothesis refers to a provisional idea whose merit requires evaluation. For proper evaluation, the framer of a hypothesis needs to define specifics in operational terms. A hypothesis requires more work by the researcher in order to either confirm or disprove it. In due course, a confirmed hypothesis may become part of a theory or occasionally may grow to become a theory itself. Normally, scientific hypotheses have the form of a mathematical model. Sometimes, but not always, one can also formulate them as existential statements, stating that some particular instance of the phenomenon under examination has some characteristic and causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.
Any useful hypothesis will enable predictions by reasoning (including deductive reasoning). It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction may also invoke statistics and only talk about probabilities. Karl Popper, following others, has argued that a hypothesis must be falsifiable, and that one cannot regard a proposition or theory as scientific if it does not admit the possibility of being shown false. Other philosophers of science have rejected the criterion of falsifiability or supplemented it with other criteria, such as verifiability (e.g., verificationism) or coherence (e.g., confirmation holism). The scientific method involves experimentation on the basis of hypotheses in order to answer questions and explore observations.
In framing a hypothesis, the investigator must not currently know the outcome of a test or that it remains reasonably under continuing investigation. Only in such cases does the experiment, test or study potentially increase the probability of showing the truth of a hypothesis. If the researcher already knows the outcome, it counts as a "consequence" — and the researcher should have already considered this while formulating the hypothesis. If one cannot assess the predictions by observation or by experience, the hypothesis classes as not yet useful, and must wait for others who might come afterward to make possible the needed observations. For example, a new technology or theory might make the necessary experiments feasible.
Evaluating hypotheses
Karl Popper's hypothetico-deductive method (also known as the method of "conjectures and refutations") demands falsifiable hypotheses, framed in such a manner that the scientific community can prove them false (usually by observation). According to this view, a hypothesis cannot be "confirmed", because there is always the possibility that a future experiment will show that it is false. Hence, failing to falsify a hypothesis does not prove that hypothesis: it remains provisional. However, a hypothesis that has been rigorously tested and not falsified can form a reasonable basis for action, i.e., we can act as if it is true, until such time as it is falsified. Just because we've never observed rain falling upward, doesn't mean that we never will—however improbable, our theory of gravity may be falsified some day.
Popper's view is not the only view on evaluating hypotheses. For example, some forms of empiricism hold that under a well-crafted, well-controlled experiment, a lack of falsification does count as verification, since such an experiment ranges over the full scope of possibilities in the problem domain. Should we ever discover some place where gravity did not function, and rain fell upward, this would not falsify our current theory of gravity (which, on this view, has been verified by innumerable well-formed experiments in the past)--it would rather suggest an expansion of our theory to encompass some new force or previously undiscovered interaction of forces. In other words, our initial theory as it stands is verified but incomplete. This situation illustrates the importance of having well-crafted, well-controlled experiments that range over the full scope of possibilities for applying the theory.
In recent years philosophers of science have tried to integrate the various approaches to evaluating hypothesis, and the scientific method in general, to form a more complete system that integrates the individual concerns of each approach. Notably, Imre Lakatos and Paul Feyerabend have produced novel attempts at such a synthesis. Both men also happen to be former students of Popper.
Scientific hypothesis
People refer to a trial solution to a problem as a hypothesis — often called an "educated guess" — because it provides a suggested solution based on the evidence. Experimenters may test and reject several hypotheses before solving the problem.
According to Schick and Vaughn, researchers weighing up alternative hypotheses may take into consideration:
Testability (compare falsifiability as discussed above)
Simplicity (as in the application of "Occam's razor", discouraging the postulation of excessive numbers of entities)
Scope - the apparent application of the hypothesis to multiple cases of phenomena
Fruitfulness - the prospect that a hypothesis may explain further phenomena in the future
Conservatism - the degree of "fit" with existing recognized knowledge-systems
Conceptual definition

· A conceptual definition is an element of the scientific research process, in which a specific concept is defined as a measurable occurrence. It basically gives you the meaning of the concept. It is mostly used in fields of philosophy, psychology, communication studies. This is especially important when conducting a content analysis.
· Examples of ideas that are often conceptually defined include intelligence, knowledge, tolerance, and preference.
· Following the establishment of a conceptual definition, the researcher must use an operational definition to indicate how the abstract concept will be measured.
· Conceptual vs. operational definition. Example for the concept of weight:
Conceptual definition: a measurement of the gravitational force acting on an object.
Operational definition: the result of measuring an object on a Newton spring scale.
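As a small, purely illustrative sketch (not from the original text), the snippet below operationalizes one of the concepts mentioned above, tolerance, as the average of a few hypothetical survey items rated from 1 to 5; the items and the scoring rule are assumptions made only for the example.

    # Conceptual definition (assumed): "tolerance" is a person's general acceptance
    # of views different from their own.
    # Operational definition (assumed): the mean score over several survey items rated 1-5.

    def tolerance_score(item_ratings):
        # Turn raw survey ratings (numbers from 1 to 5) into a single measurable value.
        if not item_ratings:
            raise ValueError("at least one survey item is required")
        return sum(item_ratings) / len(item_ratings)

    # A hypothetical respondent who answered four items:
    print(tolerance_score([4, 5, 3, 4]))   # -> 4.0

The point is simply that the abstract concept becomes measurable only once such an operational rule has been stated.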
Historical method
The historical method comprises the techniques and guidelines by which historians use primary sources and other evidence to research and then to write history. The question of the nature, and indeed the possibility, of sound historical method is raised in the philosophy of history, as a question of epistemology. The following summarizes the history guidelines commonly used by historians in their work, under the headings of external criticism, internal criticism, and synthesis.
External criticism: authenticity and provenance
Garraghan divides criticism into six inquiries.
When was the source, written or unwritten, produced (date)?
Where was it produced (localization)?
By whom was it produced (authorship)?
From what pre-existing material was it produced (analysis)?
In what original form was it produced (integrity)?
What is the evidential value of its contents (credibility)?
The first four are known as higher criticism; the fifth, lower criticism; and, together, external criticism. The sixth and final inquiry about a source is called internal criticism.
R. J. Shafer on external criticism: "It sometimes is said that its function is negative, merely saving us from using false evidence; whereas internal criticism has the positive function of telling us how to use authenticated evidence."
Higher criticism

R. J. Shafer writes, "Determination of authorship and date involves one or all of the following: (a) content analysis, (b) comparison with the content of other evidence, (c) tests of the physical properties of the evidence."[3] Content analysis includes examinations of anachronisms in language, datable references, and consistency with a cultural setting. Comparison with other writings may involve palaeography, the study of style of handwriting, the study of stylometry and comparison of literary style with known authors, or something as simple as a reference to the document's author in another one of his works or by a contemporary. Physical properties include the properties of the paper, the consistency of the ink, and the appearance of a seal, as well as the results of radioactive carbon dating.
Lower criticism
For more details on this topic, see Textual criticism.
Lower criticism is more frequently known as "textual criticism," and it is concerned with determining an accurate text in cases where we have copies instead of the original. Approaches to textual criticism include eclecticism (selecting or choosing from various sources), stemmatics and cladistics (classification of organisms based on the branchings of descendant lineages from a common ancestor). At the heart of eclecticism is that one should adopt the reading as original that most easily explains the derivation of the alternative readings. Stemmatics attempts to construct a "family tree" of extant manuscripts to help determine the correct reading. Cladistics makes use of statistical analysis in a similar endeavor.
Internal criticism: historical reliability
Noting that few documents are accepted as completely reliable, Louis Gottschalk sets down the general rule, "for each particular of a document the process of establishing credibility should be separately undertaken regardless of the general credibility of the author." An author's trustworthiness in the main may establish a background probability for the consideration of each statement, but each piece of evidence extracted must be weighed individually.
Eyewitness evidence
R. J. Shafer offers this checklist for evaluating eyewitness testimony:
Is the real meaning of the statement different from its literal meaning? Are words used in senses not employed today? Is the statement meant to be ironic (i.e., mean other than it says)?
How well could the author observe the thing he reports? Were his senses equal to the observation? Was his physical location suitable to sight, hearing, touch? Did he have the proper social ability to observe: did he understand the language, have other expertise required (e.g., law, military); was he not being intimidated by his wife or the secret police?
How did the author report, and what was his ability to do so?
Regarding his ability to report, was he biased? Did he have proper time for reporting? Proper place for reporting? Adequate recording instruments?
When did he report in relation to his observation? Soon? Much later?
What was the author's intention in reporting? For whom did he report? Would that audience be likely to require or suggest distortion to the author?
Are there additional clues to intended veracity? Was he indifferent on the subject reported, thus probably not intending distortion? Did he make statements damaging to himself, thus probably not seeking to distort? Did he give incidental or casual information, almost certainly not intended to mislead?
Do his statements seem inherently improbable: e.g., contrary to human nature, or in conflict with what we know?
Remember that some types of information are easier to observe and report on than others.
Are there inner contradictions in the document?
Louis Gottschalk adds an additional consideration: "Even when the fact in question may not be well-known, certain kinds of statements are both incidental and probable to such a degree that error or falsehood seems unlikely. If an ancient inscription on a road tells us that a certain proconsul built that road while Augustus was princeps, it may be doubted without further corroboration that that proconsul really built the road, but it would be harder to doubt that the road was built during the principate of Augustus. If an advertisement informs readers that 'A and B Coffee may be bought at any reliable grocer's at the unusual price of fifty cents a pound,' all the inferences of the advertisement may well be doubted without corroboration except that there is a brand of coffee on the market called 'A and B Coffee.'"
Garraghan says that most information comes from "indirect witnesses," people who were not present on the scene but heard of the events from someone else.[6] Gottschalk says that a historian may sometimes use hearsay evidence. He writes, "In cases where he uses secondary witnesses, however, he does not rely upon them fully. On the contrary, he asks: (1) On whose primary testimony does the secondary witness base his statements? (2) Did the secondary witness accurately report the primary testimony as a whole? (3) If not, in what details did he accurately report the primary testimony? Satisfactory answers to the second and third questions may provide the historian with the whole or the gist of the primary testimony upon which the secondary witness may be his only means of knowledge. In such cases the secondary source is the historian's 'original' source, in the sense of being the 'origin' of his knowledge. Insofar as this 'original' source is an accurate report of primary testimony, he tests its credibility as he would that of the primary testimony itself."[7]
Oral tradition
Gilbert Garraghan maintains that oral tradition may be accepted if it satisfies either two "broad conditions" or six "particular conditions", as follows:
Broad conditions stated.
The tradition should be supported by an unbroken series of witnesses, reaching from the immediate and first reporter of the fact to the living mediate witness from whom we take it up, or to the one who was the first to commit it to writing.
There should be several parallel and independent series of witnesses testifying to the fact in question.
Particular conditions formulated.
The tradition must report a public event of importance, such as would necessarily be known directly to a great number of persons.
The tradition must have been generally believed, at least for a definite period of time.
During that definite period it must have gone without protest, even from persons interested in denying it.
The tradition must be one of relatively limited duration. [Elsewhere, Garraghan suggests a maximum limit of 150 years, at least in cultures that excel in oral remembrance.]
The critical spirit must have been sufficiently developed while the tradition lasted, and the necessary means of critical investigation must have been at hand.
Critical-minded persons who would surely have challenged the tradition — had they considered it false — must have made no such challenge.
Other methods of verifying oral tradition may exist, such as comparison with the evidence of archaeological remains.
More recent evidence concerning the potential reliability or unreliability of oral tradition has come out of fieldwork in West Africa and Eastern Europe.
Synthesis: historical reasoning
Once individual pieces of information have been assessed in context, hypotheses can be formed and established by historical reasoning.
Argument to the best explanation
C. Behan McCullagh lays down seven conditions for a successful argument to the best explanation:
The statement, together with other statements already held to be true, must imply yet other statements describing present, observable data. (We will henceforth call the first statement 'the hypothesis', and the statements describing observable data, 'observation statements'.)
The hypothesis must be of greater explanatory scope than any other incompatible hypothesis about the same subject; that is, it must imply a greater variety of observation statements.
The hypothesis must be of greater explanatory power than any other incompatible hypothesis about the same subject; that is, it must make the observation statements it implies more probable than any other.
The hypothesis must be more plausible than any other incompatible hypothesis about the same subject; that is, it must be implied to some degree by a greater variety of accepted truths than any other, and be implied more strongly than any other; and its probable negation must be implied by fewer beliefs, and implied less strongly than any other.
The hypothesis must include fewer new suppositions about the past which are not already implied to some extent by existing beliefs.
It must be disconfirmed by fewer accepted beliefs than any other incompatible hypothesis about the same subject; that is, when conjoined with accepted truths it must imply fewer observation statements and other statements which are believed to be false.
It must exceed other incompatible hypotheses about the same subject by so much, in characteristics 2 to 6, that there is little chance of an incompatible hypothesis, after further investigation, soon exceeding it in these respects.
McCullagh sums up, "if the scope and strength of an explanation are very great, so that it explains a large number and variety of facts, many more than any competing explanation, then it is likely to be true."
Social research
Social research refers to research conducted by social scientists (primarily within sociology and social psychology), but also within other disciplines such as social policy, human geography, political science, social anthropology and education. Sociologists and other social scientists study diverse things: from census data on hundreds of thousands of human beings, through the in-depth analysis of the life of a single important person to monitoring what is happening on a street today - or what was happening a few hundred years ago.
Social scientists use many different methods in order to describe, explore and understand social life. Social methods can generally be subdivided into two broad categories. Quantitative methods are concerned with attempts to quantify social phenomena and collect and analyse numerical data, and focus on the links among a smaller number of attributes across many cases. Qualitative methods, on the other hand, emphasise personal experiences and interpretation over quantification, are more concerned with understanding the meaning of social phenomena and focus on links among a larger number of attributes across relatively few cases. While very different in many aspects, both qualitative and quantitative approaches involve a systematic interaction between theories and data.
Common tools of quantitative researchers include surveys, questionnaires, and secondary analysis of statistical data that has been gathered for other purposes (for example, censuses or the results of social attitudes surveys). Commonly used qualitative methods include focus groups, participant observation, and other techniques.
Ordinary human inquiry
Before the advent of sociology and application of the scientific method to social research, human inquiry was mostly based on personal experiences, and received wisdom in the form of tradition and authority. Such approaches often led to errors such as inaccurate observations, overgeneralisation, selective observations, subjectivity and lack of logic.
Foundations of social research
Social research (and social science in general) is based on logic and empirical observations. Charles C. Ragin writes in his book Constructing Social Research that "Social research involves the interaction between ideas and evidence. Ideas help social researchers make sense of evidence, and researchers use evidence to extend, revise and test ideas". Social research thus attempts to create or validate theories through data collection and data analysis, and its goal is exploration, description and explanation. It should never lead to, or be mistaken for, philosophy or belief. Social research aims to find social patterns of regularity in social life and usually deals with social groups (aggregates of individuals), not individuals themselves (although the science of psychology is an exception here). Research can also be divided into pure research and applied research. Pure research has no application to real life, whereas applied research attempts to influence the real world.
There are no laws in social science that parallel the laws of natural science. A law in social science is a universal generalization about a class of facts. A fact is an observed phenomenon, and observation means it has been seen, heard or otherwise experienced by the researcher. A theory is a systematic explanation for the observations that relate to a particular aspect of social life. Concepts are the basic building blocks of theory and are abstract elements representing classes of phenomena. Axioms or postulates are basic assertions assumed to be true. Propositions are conclusions drawn about the relationships among concepts, based on analysis of axioms. Hypotheses are specified expectations about empirical reality which are derived from propositions. Social research involves testing these hypotheses to see if they are true.
Social research involves creating a theory, operationalization (measurement of variables) and observation (actual collection of data to test hypothesized relationship).
Social theories are written in the language of variables; in other words, theories describe logical relationships between variables. Variables are logical sets of attributes, with people being the 'carriers' of those variables (for example, gender can be a variable with two attributes: male and female). Variables are also divided into independent variables (data) that influence the dependent variables (which scientists are trying to explain). For example, in a study of how different dosages of a drug are related to the severity of symptoms of a disease, a measure of the severity of the symptoms of the disease is a dependent variable and the administration of the drug in specified doses is the independent variable. Researchers will compare the different values of the dependent variable (severity of the symptoms) and attempt to draw conclusions.
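Sticking with the drug example in the paragraph above, here is a minimal Python sketch with invented numbers: dose is treated as the independent variable and symptom severity as the dependent variable, and mean severity is compared across dose groups.

    # Hypothetical observations: (dose administered in mg, observed symptom severity on a 0-10 scale).
    observations = [
        (0, 8.1), (0, 7.6),
        (50, 6.2), (50, 5.9),
        (100, 4.1), (100, 4.4),
    ]

    # Group the dependent variable (severity) by the independent variable (dose).
    by_dose = {}
    for dose, severity in observations:
        by_dose.setdefault(dose, []).append(severity)

    for dose in sorted(by_dose):
        scores = by_dose[dose]
        print(f"dose {dose:>3} mg: mean severity {sum(scores) / len(scores):.2f}")

Comparing the printed means across dose levels is the simplest version of "comparing the different values of the dependent variable" mentioned above.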
Types of explanations
Explanations in social theories can be idiographic or nomothetic. An idiographic approach to an explanation is one where the scientists seek to exhaust the idiosyncratic causes of a particular condition or event, i.e. by trying to provide all possible explanations of a particular case. Nomothetic explanations tend to be more general with scientists trying to identify a few causal factors that impact a wide class of conditions or events. For example, when dealing with the problem of how people choose a job, idiographic explanation would be to list all possible reasons why a given person (or group) chooses a given job, while nomothetic explanation would try to find factors that determine why job applicants in general choose a given job.
Types of inquiry
Social research can be deductive or inductive. Inductive inquiry (also known as grounded research) is a model in which general principles (theories) are developed from specific observations. In deductive inquiry, specific expectations (hypotheses) are developed on the basis of general principles (i.e. social scientists start from an existing theory, and then search for proof). For example, in inductive research, if a scientist finds that some specific religious minorities tend to favor a specific political view, he may then extrapolate this to the hypothesis that all religious minorities tend to have the same political view. In deductive research, a scientist would start from the hypothesis that religious affiliation influences political views and then begin observations to prove or disprove this hypothesis.
Quantitative / qualitative debate
There is usually a trade-off between the number of cases and the number of variables that social research can study. Qualitative research usually involves few cases with many variables, while quantitative research involves many cases with few variables.
There is some debate over whether "quantitative research" and "qualitative research" methods can be complementary: some researchers argue that combining the two approaches is beneficial and helps build a more complete picture of the social world, while other researchers believe that the epistemologies that underpin each of the approaches are so divergent that they cannot be reconciled within a research project.
While quantitative methods are based on a natural science, positivist model of testing theory, qualitative methods are based on interpretivism and are more focused around generating theories and accounts. Positivists treat the social world as something that is 'out there', external to the social scientist and waiting to be researched. Interpretivists, on the other hand believe that the social world is constructed by social agency and therefore any intervention by a researcher will affect social reality. Herein lies the supposed conflict between quantitative and qualitative approaches - quantitative approaches traditionally seek to minimise intervention in order to produce valid and reliable statistics, whereas qualitative approaches traditionally treat intervention as something that is necessary (often arguing that participation can lead to a better understanding of a social situation).
However, it is increasingly recognised that the significance of these differences should not be exaggerated and that quantitative and qualitative approaches can be complementary. They can be combined in a number of ways, for example:
Qualitative methods can be used in order to develop quantitative research tools. For example, focus groups could be used to explore an issue with a small number of people and the data gathered using this method could then be used to develop a quantitative survey questionnaire that could be administered to a far greater number of people allowing results to be generalised.
Qualitative methods can be used to explore and facilitate the interpretation of relationships between variables. For example, researchers may inductively hypothesize that there would be a positive relationship between positive attitudes of sales staff and the amount of sales of a store. However, quantitative, deductive, structured observation of 576 convenience stores could reveal that this was not the case, and in order to understand why the relationship between the variables was negative the researchers may undertake qualitative case studies of four stores including participant observation. This might abductively confirm that the relationship was negative, but that it was not the positive attitude of sales staff that led to low sales, but rather that high sales led to busy staff who were less likely to express positive emotions at work![1]
Quantitative methods are useful for describing social phenomena, especially on a larger scale. Qualitative methods allow social scientists to provide richer explanations (and descriptions) of social phenomena, frequently on a smaller scale. By using two or more approaches researchers may be able to 'triangulate' their findings and provide a more valid representation of the social world.
A combination of different methods are often used within "comparative research", which involves the study of social processes across nation-states, or across different types of society.
Paradigms
Social scientists usually follow one or more of the several specific sociological paradigms (points of view):
conflict paradigm focuses on the ability of some groups to dominate others, or resistance to such domination.
ethnomethodology paradigm examines how people make sense out of social life in the process of living it, as if each was a researcher engaged in enquiry.
feminist paradigm focuses on how male dominance of society has shaped social life.
Darwinist paradigm sees a progressive evolution in social life.
positivist paradigm was an early 19th century approach, now considered obsolete in its pure form. Positivists believed we can scientifically discover all the rules governing social life.
structural functionalist paradigm also known as a social systems paradigm addresses what functions various elements of the social system perform in regard to the entire system.
symbolic interactionist paradigm examines how shared meanings and social patterns are developed in the course of social interactions.
Of these, the conflict paradigm of Karl Marx, the interactionism of Max Weber and George Herbert Mead, and the structural functionalism of Talcott Parsons are the most well known.
The ethics of social research
The primary assumptions of the ethics in social research are:
voluntary participation
no harm to subjects
integrity
PAC: Privacy, anonymity and confidentiality
Social research techniques
Quantitative methods
structured interview
statistical surveys and questionnaires
structured observation
content analysis
secondary analysis
Quantitative marketing research
Qualitative methods
analytic induction
ethnography
focus groups
morphological analysis
participant observation
semi-structured interview
unstructured interview
textual analysis
theoretical sampling
Quantitative research
Quantitative research is the systematic scientific investigation of quantitative properties and phenomena and their relationships. The objective of quantitative research is to develop and employ mathematical models, theories and/or hypotheses pertaining to natural phenomena. The process of measurement is central to quantitative research because it provides the fundamental connection between empirical observation and mathematical expression of quantitative relationships.
Quantitative research is widely used in both the natural sciences and social sciences, from physics and biology to sociology and journalism. It is also used as a way to research different aspects of education. The term quantitative research is most often used in the social sciences in contrast to qualitative research.
Quantitative research is generally made using scientific methods, which can include:
· The generation of models, theories and hypotheses
· The development of instruments and methods for measurement
· Experimental control and manipulation of variables
· Collection of empirical data
· Modeling and analysis of data
· Evaluation of results
Quantitative research is often an iterative process whereby evidence is evaluated, theories and hypotheses are refined, technical advances are made, and so on. Virtually all research in physics is quantitative, whereas research in other scientific disciplines, such as taxonomy and anatomy, may involve a combination of quantitative and other analytic approaches and methods. D Pattni describes quantitative research as a very powerful tool for organisations.
In the social sciences particularly, quantitative research is often contrasted with qualitative research which is the examination, analysis and interpretation of observations for the purpose of discovering underlying meanings and patterns of relationships, including classifications of types of phenomena and entities, in a manner that does not involve mathematical models. Approaches to quantitative psychology were first modelled on quantitative approaches in the physical sciences by Gustav Fechner in his work on psychophysics, which built on the work of Ernst Heinrich Weber. Although a distinction is commonly drawn between qualitative and quantitative aspects of scientific investigation, it has been argued that the two go hand in hand. For example, based on analysis of the history of science, Kuhn (1961, p. 162) concludes that “large amounts of qualitative work have usually been prerequisite to fruitful quantification in the physical sciences”[1]. Qualitative research is often used to gain a general sense of phenomena and to form theories that can be tested using further quantitative research. For instance, in the social sciences qualitative research methods are often used to gain better understanding of such things as intentionality (from the speech response of the researchee) and meaning (why did this person/group say something and what did it mean to them?).
Although quantitative investigation of the world has existed since people first began to record events or objects that had been counted, the modern idea of quantitative processes has its roots in Auguste Comte's positivist framework.
Statistics in quantitative research
Statistics is the most widely used branch of mathematics in quantitative research outside of the physical sciences, and also finds applications within the physical sciences, such as in statistical mechanics. Statistical methods are used extensively within fields such as economics, social sciences and biology. Quantitative research using statistical methods typically begins with the collection of data based on a theory or hypothesis, followed by the application of descriptive or inferential statistical methods. Typically, a very large volume of data is collected, which requires validating, verifying and recoding before analysis. Software packages such as PSPP and R are typically used for this purpose. Causal relationships are studied by manipulating factors thought to influence the phenomena of interest while controlling other variables relevant to the experimental outcomes. In the field of health, for example, researchers might measure and study the relationship between dietary intake and measurable physiological effects such as weight loss, controlling for other key variables such as exercise. Quantitatively based opinion surveys are widely used in the media, with statistics such as the proportion of respondents in favor of a position commonly reported. In opinion surveys, respondents are asked a set of structured questions and their responses are tabulated. In the field of climate science, researchers compile and compare statistics such as temperature or atmospheric concentrations of carbon dioxide.
Empirical relationships and associations are also frequently studied using some form of general linear model, non-linear model, or factor analysis. A fundamental principle in quantitative research is that correlation does not imply causation. This principle follows from the fact that it is always possible that a spurious relationship exists for variables between which covariance is found to some degree. Associations may be examined between any combination of continuous and categorical variables using statistical methods.
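A minimal sketch, using invented data, of how a spurious correlation can arise when a hidden confounder drives two variables that do not cause each other (the classic ice-cream-and-drownings example, with the confounder standing in for something like temperature):

import numpy as np

rng = np.random.default_rng(1)
n = 500

# A hidden confounder drives both observed variables
confounder = rng.normal(0, 1, n)
ice_cream_sales = 10 + 3.0 * confounder + rng.normal(0, 1, n)
drownings = 2 + 1.5 * confounder + rng.normal(0, 1, n)

# Strong covariance appears even though neither variable causes the other
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print("correlation:", round(r, 2))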
Measurement in quantitative research
Views regarding the role of measurement in quantitative research are somewhat divergent. Measurement is often regarded as being only a means by which observations are expressed numerically in order to investigate causal relations or associations. However, it has been argued that measurement often plays a more important role in quantitative research. For example, Thomas Kuhn (1961) argued that results which appear anomalous in the context of accepted theory can potentially lead to the search for a new natural phenomenon. He believed that such anomalies are most striking when encountered during the process of obtaining measurements, as reflected in the following observations regarding the function of measurement in science:
When measurement departs from theory, it is likely to yield mere numbers, and their very neutrality makes them particularly sterile as a source of remedial suggestions. But numbers register the departure from theory with an authority and finesse that no qualitative technique can duplicate, and that departure is often enough to start a search (Kuhn, 1961, p. 180).
In classical physics, the theory and definitions which underpin measurement are generally deterministic in nature. In contrast, probabilistic measurement models such as the Rasch model and item response theory models are generally employed in the social sciences. Psychometrics is the field of study concerned with the theory and technique for measuring social and psychological attributes and phenomena. This field is central to much quantitative research that is undertaken within the social sciences.
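For instance, in the dichotomous Rasch model the probability that person n answers item i correctly is commonly written as

\[
P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{e^{\theta_n - b_i}}{1 + e^{\theta_n - b_i}},
\]

where \(\theta_n\) is the person's ability and \(b_i\) the item's difficulty. This standard formula is added here only to illustrate what "probabilistic measurement model" means in practice.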
Quantitative research may involve the use of proxies as stand-ins for other quantities that cannot be directly measured. Tree-ring width, for example, is considered a reliable proxy of ambient environmental conditions such as the warmth of growing seasons or amount of rainfall. Although scientists cannot directly measure the temperature of past years, tree-ring width and other climate proxies have been used to provide a semi-quantitative record of average temperature in the Northern Hemisphere back to 1000 A.D. When used in this way, the proxy record (tree ring width, say) only reconstructs a certain amount of the variance of the original record. The proxy may be calibrated (for example, during the period of the instrumental record) to determine how much variation is captured, including whether both short and long term variation is revealed. In the case of tree-ring width, different species in different places may show more or less sensitivity to, say, rainfall or temperature: when reconstructing a temperature record there is considerable skill in selecting proxies that are well correlated with the desired variable.
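A hedged sketch of the calibration idea, using entirely invented numbers: a proxy (ring width) is regressed against the instrumental record, the captured variance is estimated, and the fitted relation is then applied to pre-instrumental proxy values.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical instrumental period: measured temperatures and tree-ring widths
temperature = rng.normal(14.0, 0.5, 120)                        # degrees C
ring_width = 1.0 + 0.4 * temperature + rng.normal(0, 0.3, 120)  # mm

# Calibrate the proxy: linear fit of temperature on ring width
slope, intercept = np.polyfit(ring_width, temperature, 1)

# Fraction of temperature variance the proxy captures (R squared)
predicted = intercept + slope * ring_width
r_squared = 1 - np.var(temperature - predicted) / np.var(temperature)
print("calibration:", round(slope, 3), round(intercept, 3), "variance captured:", round(r_squared, 2))

# The calibrated relation can then be applied to pre-instrumental ring widths
old_ring_widths = np.array([6.2, 6.8, 7.1])
print("reconstructed temperatures:", intercept + slope * old_ring_widths)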
Quantitative methods
Quantitative methods are research techniques that are used to gather quantitative data - information dealing with numbers and anything that is measurable. Statistics, tables and graphs are often used to present the results of these methods. They are therefore to be distinguished from qualitative methods.
In most physical and biological sciences, the use of either quantitative or qualitative methods is uncontroversial, and each is used when appropriate. In the social sciences, particularly in sociology, social anthropology and psychology, the use of one or other type of method has become a matter of controversy and even ideology, with particular schools of thought within each discipline favouring one type of method and pouring scorn on the other. Advocates of quantitative methods argue that only by using such methods can the social sciences become truly scientific; advocates of qualitative methods argue that quantitative methods tend to obscure the reality of the social phenomena under study because they underestimate or neglect the non-measurable factors, which may be the most important. The modern tendency (and in reality the majority tendency throughout the history of social science) is to use eclectic approaches. Quantitative methods might be used within a global qualitative frame, and qualitative methods might be used to understand the meaning of the numbers produced by quantitative methods. Using quantitative methods, it is possible to give precise and testable expression to qualitative ideas. This combination of quantitative and qualitative data gathering is often referred to as mixed-methods research.
Examples of quantitative research
Research that determines the percentage of each element making up Earth's atmosphere.
A survey that concludes that the average patient has to wait two hours in the waiting room of a certain doctor before being seen.
An experiment in which group X is given two tablets of aspirin a day and group Y is given two tablets of a placebo a day, with each participant randomly assigned to one or other of the groups (a minimal sketch of such a design appears after this list).
The numerical factors, such as the two tablets, the percentages of elements and the waiting time, make the situations and results quantitative.
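A minimal sketch of the third example above, with hypothetical participants and outcomes; in a real trial the outcome measure, sample size and analysis would be specified in the study protocol.

import random
import statistics

random.seed(3)

participants = [f"P{i:02d}" for i in range(1, 41)]
random.shuffle(participants)                 # random assignment
group_x = participants[:20]                  # aspirin, two tablets a day (hypothetical)
group_y = participants[20:]                  # placebo, two tablets a day

# Hypothetical measured outcome for each participant (e.g., change in a pain score)
outcome_x = [random.gauss(-2.0, 1.0) for _ in group_x]
outcome_y = [random.gauss(-0.5, 1.0) for _ in group_y]

# Quantitative comparison of the two groups
print("mean change, group X:", round(statistics.mean(outcome_x), 2))
print("mean change, group Y:", round(statistics.mean(outcome_y), 2))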
Qualitative research
Qualitative research is a field of inquiry that crosscuts disciplines and subject matters. Qualitative researchers aim to gather an in-depth understanding of human behavior and the reasons that govern such behavior. The discipline investigates the why and how of decision making, not just what, where and when. Hence, smaller but focused samples are more often needed than large random samples.
History
Qualitative research was one of the first forms of social study (conducted e.g. by Bronisław Malinowski or Elton Mayo), but in the 1950s and 1960s, as quantitative science reached its peak of popularity (the Quantitative Revolution), it diminished in importance and began to regain recognition only in the 1970s. Until then the phrase 'qualitative research' was restricted to anthropology or sociology, and terms like ethnography, fieldwork, participant observation and the Chicago school (sociology) were used instead. During the 1970s and 1980s qualitative research began to be used in other disciplines and became a significant type of research in the fields of education studies, social work studies, women's studies, disability studies, information studies, management studies, nursing service studies, human service studies, psychology, communication studies, and others. Qualitative research was also used in the consumer products industry during this period: researchers interested in investigating new consumer products and product positioning opportunities worked with a handful of the earliest consumer research pioneers, including Gene Reilly of The Gene Reilly Group in Darien, CT, Jerry Schoenfeld of Gerald Schoenfeld & Partners in Tarrytown, NY, and Martin Calle of Calle & Company, Greenwich, CT, as well as Peter Cooper in London, England, and Hugh Mackay in Sydney, Australia. In the late 1980s and 1990s, after a spate of criticisms from the quantitative side, paralleling a slowdown in traditional media spending for the decade, new methods of qualitative research evolved to address the perceived problems with reliability and imprecise modes of data analysis.[2]
In the last thirty years the acceptance of qualitative research by journal publishers and editors has been growing. Prior to that time, many mainstream journals tended to publish research articles modelled on the natural sciences and featuring quantitative analysis [3].
Distinctions from quantitative research
First, sampling in qualitative research is usually not random but purposive in design. In other words, cases can be selected according to whether they typify, or not, certain characteristics or contextual locations. Secondly, the role or position of the researcher is given greater critical attention. This is because in qualitative research the possibility of the researcher taking a 'neutral' or transcendental position is seen as more problematic in practical and/or philosophical terms. Hence qualitative researchers are often exhorted to reflect on their role in the research process and make this clear in the analysis. Thirdly, while qualitative data analysis can take a wide variety of forms, it tends to differ from quantitative research in its focus on language, signs and meaning, as well as in approaches to analysis that are holistic and contextual rather than reductionist and isolationist. Nevertheless, systematic and transparent approaches to analysis are almost always regarded as essential for rigor. For example, many qualitative methods require researchers to carefully code data and discern and document themes in a consistent and reliable way.
Perhaps the most traditional division in the way qualitative and quantitative research have been used in the social sciences is for qualitative methods to be used for exploratory (i.e., hypothesis-generating) purposes or for explaining puzzling quantitative results, while quantitative methods are used to test hypotheses. This is because establishing content validity (do measures measure what a researcher thinks they measure?) is seen as one of the strengths of qualitative research, while quantitative methods are seen as providing more representative, reliable and precise measures through focused hypotheses, measurement tools and applied mathematics. By contrast, qualitative data is usually difficult to graph or display in mathematical terms.
Qualitative research is often used for policy and program evaluation research since it can answer certain important questions more efficiently and effectively than quantitative approaches. This is particularly the case for understanding how and why certain outcomes were achieved (not just what was achieved) but also answering important questions about relevance, unintended effects and impact of programs such as: Were expectations reasonable? Did processes operate as expected? Were key players able to carry out their duties? Were there any unintended effects of the program? Qualitative approaches have the advantage of allowing for more diversity in responses as well as the capacity to adapt to new developments or issues during the research process itself. While qualitative research can be expensive and time-consuming to conduct, many fields of research employ qualitative techniques that have been specifically developed to provide more succinct, cost-efficient and timely results. Rapid Rural Appraisal is one formalised example of these adaptations but there are many others.
Data collection
Qualitative researchers may use different approaches in collecting data, such as the grounded theory practice, narratology, storytelling, classical ethnography, or shadowing. Qualitative methods are also loosely present in other methodological approaches, such as action research or actor-network theory. Forms of the data collected can include interviews and group discussions, observation and reflection field notes, various texts, pictures, and other materials.
Qualitative research often categorizes data into patterns as the primary basis for organizing and reporting results. Qualitative researchers typically rely on the following methods for gathering information: Participant Observation, Non-participant Observation, Field Notes, Reflexive Journals, Structured Interview, Unstructured Interview, Analysis of documents and materials [4].
The ways of participating and observing can vary widely from setting to setting. Participant observation is a strategy of reflexive learning, not a single method of observing[5]. In participant observation [1] researchers typically become members of a culture, group, or setting, and adopt roles to conform to that setting. In doing so, the aim is for the researcher to gain a closer insight into the culture's practices, motivations and emotions. It is argued that the researchers' ability to understand the experiences of the culture may be inhibited if they observe without participating.
Some distinctive qualitative methods are the use of focus groups and key informant interviews. The focus group technique involves a moderator facilitating a small group discussion between selected individuals on a particular topic. This is a particularly popular method in market research and testing new initiatives with users/workers.
One traditional and specialized form of qualitative research is called cognitive testing or pilot testing which is used in the development of quantitative survey items. Survey items are piloted on study participants to test the reliability and validity of the items.
In the academic social sciences the most frequently used qualitative research approaches include:
1. Ethnographic Research, used for investigating cultures by collecting and describing data that is intended to help in the development of a theory. This method is also called “ethnomethodology” or "methodology of the people". An example of applied ethnographic research is the study of a particular culture and its understanding of the role of a particular disease in its cultural framework.
2. Critical Social Research, used by a researcher to understand how people communicate and develop symbolic meanings.
3. Ethical Inquiry, an intellectual analysis of ethical problems. It includes the study of ethics as related to obligation, rights, duty, right and wrong, choice etc.
4. Foundational Research, examines the foundations for a science, analyses the beliefs and develops ways to specify how a knowledge base should change in light of new information.
5. Historical Research, allows one to discuss past and present events in the context of the present condition, and allows one to reflect and provide possible answers to current issues and problems. Historical research helps us in answering questions such as: Where have we come from, where are we, who are we now and where are we going?
6. Grounded Theory, an inductive type of research based or “grounded” in the observations or data from which it was developed; it uses a variety of data sources, including quantitative data, review of records, interviews, observation and surveys.
7. Phenomenological Research, describes the “subjective reality” of an event, as perceived by the study population; it is the study of a phenomenon.
8. Philosophical Research, conducted by field experts within the boundaries of a specific field of study or profession, i.e. by the individuals best qualified to use intellectual analysis in order to clarify definitions, identify ethics, or make a value judgment concerning an issue in their field of study.
Data analysis
Interpretive techniques
The most common analysis of qualitative data is observer impression. That is, expert or layman observers examine the data, interpret it via forming an impression and report their impression in a structured and sometimes quantitative form.
Coding
Coding is an interpretive technique that both organizes the data and provides a means to introduce the interpretations of it into certain quantitative methods. Most coding requires the analyst to read the data and demarcate segments within it. Each segment is labeled with a “code” – usually a word or short phrase that suggests how the associated data segments inform the research objectives. When coding is complete, the analyst prepares reports via a mix of: summarizing the prevalence of codes, discussing similarities and differences in related codes across distinct original sources/contexts, or comparing the relationship between one or more codes.
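As a hedged illustration (the segments and code labels below are invented), the prevalence of codes and their distribution across sources can be tallied once coding is complete:

from collections import Counter

# Hypothetical coded interview segments: (source, segment, code)
coded_segments = [
    ("interview_1", "I never know who to call when the system fails.", "support_gap"),
    ("interview_1", "The training session was rushed.", "training"),
    ("interview_2", "Nobody showed us the new workflow.", "training"),
    ("interview_2", "Help desk tickets sit for days.", "support_gap"),
    ("interview_3", "I figured it out by asking a colleague.", "peer_learning"),
]

# Prevalence of codes across the whole dataset
prevalence = Counter(code for _, _, code in coded_segments)
print(prevalence)

# Which sources mention which codes (for comparing codes across contexts)
by_source = {}
for source, _, code in coded_segments:
    by_source.setdefault(source, set()).add(code)
print(by_source)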
Some qualitative data that is highly structured (e.g., open-ended responses from surveys or tightly defined interview questions) is typically coded without additional segmenting of the content. In these cases, codes are often applied as a layer on top of the data. Quantitative analysis of these codes is typically the capstone analytical step for this type of qualitative data.
Contemporary qualitative data analyses are sometimes supported by computer programs. These programs do not supplant the interpretive nature of coding but rather are aimed at enhancing the analyst’s efficiency at data storage/retrieval and at applying the codes to the data. Many programs offer efficiencies in editing and revising coding, which allow for work sharing, peer review, and recursive examination of data.
A frequent criticism of coding methods is that they seek to transform qualitative data into quantitative data, thereby draining the data of its variety, richness, and individual character. Analysts respond to this criticism by thoroughly setting out their definitions of codes and linking those codes soundly to the underlying data, thereby bringing back some of the richness that might be absent from a mere list of codes.
Recursive abstraction
Some qualitative datasets are analyzed without coding. A common method here is recursive abstraction, where datasets are summarized, those summaries are then further summarized, and so on. The end result is a more compact summary that would have been difficult to accurately discern without the preceding steps of distillation.
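A minimal sketch of the recursive structure, in which a deliberately naive stand-in (keeping the first sentence of each text) plays the role of the analyst's summarising step; in practice each summary is written by the researcher, not produced by a rule like this:

def summarise(texts, chunk_size=2):
    """Group texts into chunks and reduce each chunk to a shorter summary.
    Naive stand-in for the analyst: keep only the first sentence of each text."""
    chunks = [texts[i:i + chunk_size] for i in range(0, len(texts), chunk_size)]
    return [" / ".join(t.split(".")[0] for t in chunk) for chunk in chunks]

notes = [
    "Staff report long waits for support. Several gave examples.",
    "Training was felt to be too short. Two sites skipped it entirely.",
    "Peer help fills the gap. Colleagues share informal workarounds.",
    "Managers are unaware of the workarounds. Reporting lines are unclear.",
]

# Recursive abstraction: summarise, then summarise the summaries, until one remains
level = notes
while len(level) > 1:
    level = summarise(level)
print(level[0])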
A frequent criticism of recursive abstraction is that the final conclusions are several times removed from the underlying data. While it is true that poor initial summaries will certainly yield an inaccurate final report, qualitative analysts can respond to this criticism. They do so, like those using coding method, by documenting the reasoning behind each summary step, citing examples from the data where statements were included and where statements were excluded from the intermediate summary.
Mechanical techniques
Some techniques rely on leveraging computers to scan and sort large sets of qualitative data. At their most basic level, mechanical techniques rely on counting words, phrases, or coincidences of tokens within the data. Often referred to as content analysis, the output from these techniques is amenable to many advanced statistical analyses.
Mechanical techniques are particularly well-suited for a few scenarios. One such scenario is for datasets that are simply too large for a human to effectively analyze, or where analysis of them would be cost prohibitive relative to the value of information they contain. Another scenario is when the chief value of a dataset is the extent to which it contains “red flags” (e.g., searching for reports of certain adverse events within a lengthy journal dataset from patients in a clinical trial) or “green flags” (e.g., searching for mentions of your brand in positive reviews of marketplace products).
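A minimal sketch of both ideas, using a handful of invented journal entries and a hypothetical list of "red flag" terms:

import re
from collections import Counter

# Hypothetical free-text entries from a patient journal dataset
entries = [
    "Slept well, mild headache in the evening.",
    "Severe rash on both arms, stopped the cream.",
    "No problems today, went for a long walk.",
    "Headache again and some nausea after the dose.",
]

# Basic content analysis: token counts across the whole dataset
tokens = [w for e in entries for w in re.findall(r"[a-z]+", e.lower())]
print(Counter(tokens).most_common(5))

# "Red flag" scan: entries mentioning terms of interest (hypothetical list)
red_flags = {"rash", "nausea", "severe"}
flagged = [e for e in entries if red_flags & set(re.findall(r"[a-z]+", e.lower()))]
print(flagged)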
A frequent criticism of mechanical techniques is the absence of a human interpreter. And while masters of these methods are able to write sophisticated software to mimic some human decisions, the bulk of the “analysis” is nonhuman. Analysts respond by proving the value of their methods relative to either a) hiring and training a human team to analyze the data or b) letting the data go untouched, leaving any actionable nuggets undiscovered.
Paradigmatic differences
Contemporary qualitative research has been conducted from a large number of paradigms that influence conceptual and metatheoretical concerns of legitimacy, control, data analysis, ontology, and epistemology, among others. Research conducted in the last 10 years has been characterized by a distinct turn toward more interpretive, postmodern, and critical practices[6]. Guba and Lincoln (2005) identify five main paradigms of contemporary qualitative research: positivism, postpositivism, critical theories, constructivism, and participatory/cooperative paradigms[7]. Each of the paradigms listed by Guba and Lincoln is characterized by axiomatic differences in axiology, intended action of research, control of the research process/outcomes, relationship to foundations of truth and knowledge, validity (see below), textual representation and voice of the researcher/participants, and commensurability with other paradigms. In particular, commensurability involves the extent to which paradigmatic concerns “can be retrofitted to each other in ways that make the simultaneous practice of both possible”[8]. Positivist and postpositivist paradigms share commensurable assumptions but are largely incommensurable with critical, constructivist, and participatory paradigms. Likewise, critical, constructivist, and participatory paradigms are commensurable on certain issues (e.g., intended action and textual representation).
Validation
One of the central issues in qualitative research is validity (also known as credibility and/or dependability). There are many different ways of establishing validity, including: member check, interviewer corroboration, peer debriefing, prolonged engagement, negative case analysis, auditability, confirmability, bracketing, and balance. Most of these methods were coined, or at least extensively described, by Lincoln and Guba (1985)[9].
Academic research
By the end of the 1970s many leading journals began to publish qualitative research articles [10], and several new journals emerged which published only qualitative research studies and articles about qualitative research methods [11].
In the 1980s and 1990s, the new qualitative research journals became more multidisciplinary in focus, moving beyond qualitative research’s traditional disciplinary roots of anthropology, sociology, and philosophy [12].
The new millennium saw a dramatic increase in the number of journals specializing in qualitative research publications with at least one new qualitative research journal being launched each year.
Deductive reasoning
Deductive reasoning, sometimes called deductive logic, is reasoning which constructs or evaluates deductive arguments. In logic, an argument is deductive when the truth of the conclusion is purported to follow necessarily from, or be a logical consequence of, the premises, so that its corresponding conditional is a necessary truth. Deductive arguments are said to be valid or invalid, never true or false. A deductive argument is valid if and only if the truth of the conclusion actually does follow necessarily from (or is indeed a logical consequence of) the premises; if it is not valid, it is invalid. A valid deductive argument with true premises is said to be sound; a deductive argument which is invalid, or has one or more false premises, or both, is said to be unsound.
An example of a deductive argument and hence of deductive reasoning:
All men are mortal
Socrates is a man
(Therefore,) Socrates is mortal
Deductive reasoning is sometimes contrasted with inductive reasoning.
Deductive logic
An argument is valid when it is impossible for its premises to be true while its conclusion is false. An argument can be valid even though its premises are false. Note, for example, that the conclusion of the following argument would have to be true if its premises were true (even though they are, in fact, false):
All fire-breathing rabbits live on Earth
All humans are fire-breathing rabbits
(Therefore,) all humans live on Earth
The argument, however, is not sound. For a deductive argument to be sound, it must not only be valid; its premises must be true as well.
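The point can be made schematically (this formalisation is added for illustration and is not part of the source text). Writing R(x) for "x is a fire-breathing rabbit", E(x) for "x lives on Earth" and H(x) for "x is a human", the argument has the valid form

\[
\frac{\forall x\,(R(x) \rightarrow E(x)) \qquad \forall x\,(H(x) \rightarrow R(x))}
     {\forall x\,(H(x) \rightarrow E(x))}
\]

but its second premise is false, so the argument is valid yet unsound.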
A theory of deductive reasoning known as categorical or term logic was developed by Aristotle but was superseded by propositional (sentential) logic and predicate logic.
Deductive reasoning is sometimes contrasted with inductive reasoning. By thinking about phenomena such as how apples fall and how the planets move, Isaac Newton induced his theory of gravity. In the 19th century, Adams and LeVerrier applied Newton's theory (general principle) to deduce the existence, mass, position, and orbit of Neptune (specific conclusions) from perturbations in the observed orbit of Uranus (specific data).
Natural deduction
Deductive reasoning should be distinguished from the related concept of natural deduction, an approach to proof theory that attempts to provide a formal model of logical reasoning as it "naturally" occurs.
Inductive reasoning
Induction or inductive reasoning, sometimes called inductive logic, is reasoning which takes us "beyond the confines of our current evidence or knowledge to conclusions about the unknown."[1] The premises of an inductive argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; i.e. they do not ensure its truth. Induction is used to ascribe properties or relations to types based on an observation instance (i.e., on a number of observations or experiences); or to formulate laws based on limited observations of recurring phenomenal patterns. Induction is employed, for example, in using specific propositions such as:
This ice is cold. (or: All ice I have ever touched was cold.)
This billiard ball moves when struck with a cue. (or: Of one hundred billiard balls struck with a cue, all of them moved.)
...to infer general propositions such as:
All ice is cold.
All billiard balls move when struck with a cue.
Another example would be:
3 + 5 = 8, and 8 is an even number. Therefore, an odd number added to another odd number will result in an even number.
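Although the text above reaches the rule inductively from a single instance, this particular generalisation also happens to admit a short deductive check: for any integers m and n,

\[
(2m + 1) + (2n + 1) = 2(m + n + 1),
\]

so the sum of two odd numbers is always even.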
Inductive reasoning has been attacked several times. Historically, David Hume denied its logical admissibility. Sextus Empiricus questioned how the truth of the Universals can be established by examining some of the particulars. Examining all the particulars is difficult as they are infinite in number.[2] During the twentieth century, thinkers such as Karl Popper and David Miller have disputed the existence, necessity and validity of any inductive reasoning, including probabilistic (Bayesian) reasoning.[3] Some say scientists still rely on induction but Popper and Miller dispute this: Scientists cannot rely on induction simply because it does not exist.
Note that mathematical induction is not a form of inductive reasoning. While mathematical induction may be inspired by the non-base cases, the formulation of a base case firmly establishes it as a form of deductive reasoning.
Strong and weak induction
Strong induction
All observed crows are black.
Therefore:
All crows are black.
This exemplifies the nature of induction: inducing the universal from the particular. However, the conclusion is not certain. Unless we can systematically falsify the possibility of crows of another colour, the statement (conclusion) may actually be false.
For example, one could examine the bird's genome and learn whether it is capable of producing a differently coloured bird. In doing so, we could discover that albinism is possible, resulting in light-coloured crows. Even if you change the definition of "crow" to require blackness, the original question of the colour possibilities for a bird of that species would stand, only semantically hidden.
A strong induction is thus an argument in which the truth of the premises would make the truth of the conclusion probable, but not necessary.
Weak induction
I always hang pictures on nails.
Therefore:
All pictures hang from nails.
Assuming the first statement to be true, this example is built on the certainty that "I always hang pictures on nails" leading to the generalisation that "All pictures hang from nails". However, the link between the premise and the inductive conclusion is weak. No reason exists to believe that just because one person hangs pictures on nails that there are no other ways for pictures to be hung, or that other people cannot do other things with pictures. Indeed, not all pictures are hung from nails; moreover, not all pictures are hung. The conclusion cannot be strongly inductively made from the premise. Using other knowledge we can easily see that this example of induction would lead us to a clearly false conclusion. Conclusions drawn in this manner are usually overgeneralisations.
Many speeding tickets are given to teenagers.
Therefore:
All teenagers drive fast.
In this example, the premise is built upon a certainty; however, it is not one that leads to the conclusion. Not every teenager observed has been given a speeding ticket. In other words, unlike "The sun rises every morning", there are already plenty of examples of teenagers not being given speeding tickets. Therefore the conclusion drawn is false. Moreover, when the link is weak, the inductive logic does not give us a strong conclusion. In both of these examples of weak induction, the logical means of connecting the premise and conclusion (with the word "therefore") are faulty, and do not give us a strong inductively reasoned statement.
Validity
Problem of induction
Formal logic, as most people learn it, is deductive rather than inductive. Some philosophers claim to have created systems of inductive logic, but it is controversial whether a logic of induction is even possible. In contrast to deductive reasoning, conclusions arrived at by inductive reasoning do not have the same degree of certainty as the initial premises. For example, the conclusion that all swans are white is false, but it may have been thought true in Europe until the settlement of Australia and New Zealand, when black swans were discovered. Inductive arguments are never binding, but they may be cogent. Inductive reasoning is deductively invalid. (An argument in formal logic is valid if and only if it is not possible for the premises of the argument to be true while the conclusion is false.) In induction there are always many conclusions that can reasonably be related to certain premises. Inductions are open; deductions are closed.
It is, however, possible to derive a true statement using inductive reasoning if you already know the conclusion. The only way to have an efficient argument by induction is for the known conclusion to be true only if an unstated external conclusion is true, from which the initial conclusion was built and which has certain criteria that must be met in order for it to be true (separate from the stated conclusion). By substituting one conclusion for the other, you can inductively work out what evidence you need for your induction to be true. For example, suppose you have a window that opens one way but not the other. If you know that the only way for that to happen is that the hinges are faulty, you can inductively postulate that the only way to fix the window is to apply oil (whatever will fix the unstated conclusion). From there you can build your case. However, if your unstated conclusion is false, which can only be shown by deductive reasoning, then your whole argument by induction collapses. In this sense, inductive reasoning is not ultimately reliable.
The classic philosophical treatment of the problem of induction, meaning the search for a justification for inductive reasoning, was by the Scottish philosopher David Hume. Hume highlighted the fact that our everyday reasoning depends on patterns of repeated experience rather than deductively valid arguments. For example, we believe that bread will nourish us because it has done so in the past, but this is not a guarantee that it will always do so. As Hume said, someone who insisted on sound deductive justifications for everything would starve to death.
Instead of approaching everything with severe skepticism, Hume advocated a practical skepticism based on common sense, where the inevitability of induction is accepted.
Induction is sometimes framed as reasoning about the future from the past, but in its broadest sense it involves reaching conclusions about unobserved things on the basis of what has been observed. Inferences about the past from present evidence (for instance, in archaeology) count as induction. Induction could also be across space rather than time, for instance in physical cosmology, where conclusions about the whole universe are drawn from the limited perspective we are able to observe (see cosmic variance), or in economics, where national economic policy is derived from local economic performance.
Twentieth-century philosophy has approached induction very differently. Rather than a choice about what predictions to make about the future, induction can be seen as a choice of what concepts to fit to observations or of how to graph or represent a set of observed data. Nelson Goodman posed a "new riddle of induction" by inventing the property "grue" to which induction as a prediction about the future does not apply.
Types of inductive reasoning
Generalization
A generalization (more accurately, an inductive generalization) proceeds from a premise about a sample to a conclusion about the population.
The proportion Q of the sample has attribute A.
Therefore:
The proportion Q of the population has attribute A.
How much support the premises provide for the conclusion depends on (a) the number of individuals in the sample group compared to the number in the population, and (b) the randomness of the sample. The hasty generalisation and the biased sample are fallacies related to generalisation.
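As a rough illustration (using the standard normal-approximation interval, which is an assumption made for this example rather than anything prescribed by the text), the same observed proportion supports the generalisation more strongly as a random sample grows:

import math

def proportion_interval(successes, n, z=1.96):
    """Sample proportion with an approximate 95% normal-theory interval."""
    q = successes / n
    margin = z * math.sqrt(q * (1 - q) / n)
    return q, max(0.0, q - margin), min(1.0, q + margin)

# The interval around the same observed proportion narrows as the sample grows
for successes, n in [(6, 10), (60, 100), (600, 1000)]:
    q, lo, hi = proportion_interval(successes, n)
    print(f"n={n}: Q={q:.2f}, approx 95% interval ({lo:.2f}, {hi:.2f})")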
Statistical syllogism
A statistical syllogism proceeds from a generalization to a conclusion about an individual.
A proportion Q of population P has attribute A.
An individual I is a member of P.
Therefore:
There is a probability which corresponds to Q that I has A.
The proportion in the first premise would be something like "3/5ths of", "all", "few", etc. Two dicto simpliciter fallacies can occur in statistical syllogisms: "accident" and "converse accident".
Simple induction
Simple induction proceeds from a premise about a sample group to a conclusion about another individual.
Proportion Q of the known instances of population P has attribute A.
Individual I is another member of P.
Therefore:
There is a probability corresponding to Q that I has A.
This is a combination of a generalization and a statistical syllogism, where the conclusion of the generalization is also the first premise of the statistical syllogism.
Argument from analogy
Main article: False analogy
An argument from analogy has the following form:
I has attributes A, B, and C
J has attributes A and B
So, J has attribute C
An analogy relies on the inference that the attributes known to be shared (the similarities) imply that C is also a shared property. The support which the premises provide for the conclusion is dependent upon the relevance and number of the similarities between I and J. The fallacy related to this process is false analogy. As with other forms of inductive argument, even the best reasoning in an argument from analogy can only make the conclusion probable given the truth of the premises, not certain.
Analogical reasoning is very frequent in common sense, science, philosophy and the humanities, but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning. For more information on inferences by analogy, see Juthe, 2005.
Causal inference
A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.
Prediction
A prediction draws a conclusion about a future individual from a past sample.
Proportion Q of observed members of group G have had attribute A.
Therefore:
There is a probability corresponding to Q that other members of group G will have attribute A when next observed.
Bayesian inference
Of the candidate systems for an inductive logic, the most influential is Bayesianism. This uses probability theory as the framework for induction. Given new evidence, Bayes' theorem is used to evaluate how much the strength of a belief in a hypothesis should change.
There is debate around what informs the original degree of belief. Objective Bayesians seek an objective value for the degree of probability of a hypothesis being correct and so do not avoid the philosophical criticisms of objectivism. Subjective Bayesians hold that prior probabilities represent subjective degrees of belief, but that the repeated application of Bayes' theorem leads to a high degree of agreement on the posterior probability. They therefore fail to provide an objective standard for choosing between conflicting hypotheses. The theorem can be used to produce a rational justification for a belief in some hypothesis, but at the expense of rejecting objectivism. Such a scheme cannot be used, for instance, to decide objectively between conflicting scientific paradigms.
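A minimal sketch of that convergence claim, using conjugate Beta-Bernoulli updating (the choice of this particular model, the priors and the data are assumptions made for the example, not part of the source):

import random

random.seed(4)

# Hypothetical repeated observations of a binary event with true rate 0.7
data = [1 if random.random() < 0.7 else 0 for _ in range(500)]

# Two analysts start from very different Beta priors (alpha, beta)
priors = {"sceptic": (1, 9), "enthusiast": (9, 1)}

for name, (a, b) in priors.items():
    # Conjugate updating: each observation applies Bayes' theorem once
    a_post = a + sum(data)
    b_post = b + len(data) - sum(data)
    print(name, "posterior mean:", round(a_post / (a_post + b_post), 3))

After many observations the two posterior means are nearly identical, which is the sense in which repeated application of Bayes' theorem leads different subjective priors toward agreement.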
Edwin Jaynes, an outspoken physicist and Bayesian, argued that "subjective" elements are present in all inference, for instance in choosing axioms for deductive inference; in choosing initial degrees of belief or prior probabilities; or in choosing likelihoods. He thus sought principles for assigning probabilities from qualitative knowledge. Maximum entropy – a generalization of the principle of indifference – and transformation groups are the two tools he produced. Both attempt to alleviate the subjectivity of probability assignment in specific situations by converting knowledge of features such as a situation's symmetry into unambiguous choices for probability distributions.
Cox's theorem, which derives probability from a set of logical constraints on a system of inductive reasoning, prompts Bayesians to call their system an inductive logic.
Epistemological Probability and Induction
Based on an analysis of measurement theory (specifically the axiomatic work of Krantz-Luce-Suppes-Tversky), Henry E. Kyburg, Jr. produced a novel account of how error and predictiveness can be mediated by epistemological probability. It explains how one can adopt a rule, such as the ideal gas law PV = nRT, even though the new universal generalization produces higher error rates on the measurement of P, V, and T. It remains the most detailed procedural account of induction, in the sense of scientific theory-formation.
Epistemology
Epistemology (from Greek ἐπιστήμη (epistēmē), "knowledge, science", and λόγος (logos)) or theory of knowledge is the branch of philosophy concerned with the nature and scope (limitations) of knowledge.[1] It addresses the questions:
What is knowledge?
How is knowledge acquired?
What do people know?
How do we know what we know?
Why do we know what we know?
Much of the debate in this field has focused on analyzing the nature of knowledge and how it relates to similar notions such as truth, belief, and justification. It also deals with the means of production of knowledge, as well as skepticism about different knowledge claims.
The term was introduced into English by the Scottish philosopher James Frederick Ferrier (1808–1864).[2]
Knowledge

Distinguishing knowing that from knowing how
[Figure caption: Is knowledge a subset of that which is both true and believed? (See below.)]
In this article, and in epistemology in general, the kind of knowledge usually discussed is propositional knowledge, also known as "knowledge-that" as opposed to "knowledge-how." For example: in mathematics, it is known that 2 + 2 = 4, but there is also knowing how to add two numbers. Many (but not all) philosophers therefore think there is an important distinction between "knowing that" and "knowing how", with epistemology primarily interested in the former. This distinction is recognized linguistically in many languages, though not in modern Standard English (N.B. some languages related to English still do retain these verbs, as in Scots: "wit" and "ken").[3]
In Personal Knowledge, Michael Polanyi articulates a case for the epistemological relevance of both forms of knowledge; using the example of the act of balance involved in riding a bicycle, he suggests that the theoretical knowledge of the physics involved in maintaining a state of balance cannot substitute for the practical knowledge of how to ride, and that it is important to understand how both are established and grounded.
In recent times, some epistemologists (Sosa, Greco, Kvanvig, Zagzebski) have argued that we should not think of knowledge this way. On this view, epistemology should evaluate people's properties (i.e., intellectual virtues) instead of propositions' properties. This is, in short, because higher forms of cognitive success (i.e., understanding) involve features that cannot be evaluated from a justified-true-belief view of knowledge.
Belief
Often, statements of "belief" mean that the speaker predicts something that will prove to be useful or successful in some sense—perhaps the speaker might "believe in" his or her favorite football team. This is not the kind of belief usually addressed within epistemology. The kind that is dealt with is when "to believe something" simply means any cognitive content held as true. For example, to believe that the sky is blue is to think that the proposition "The sky is blue" is true.
Knowledge entails belief, so the statement, "I know the sky is blue, but I don't believe it", is self-contradictory (see Moore's paradox). On the other hand, knowledge about a belief does not entail an endorsement of its truth. For example, "I know about astrology, but I don't believe in it" is perfectly acceptable. It is also possible that someone believes in astrology but knows virtually nothing about it.
Belief is a subjective personal basis for individual behavior, while truth is an objective state independent of the individual. On occasion, knowledge and belief can conflict, producing "cognitive dissonance".

Criteria of truth
Whether someone's belief is true is not a prerequisite for them to believe it. On the other hand, if something is actually known, then it categorically cannot be false. For example, suppose a person believes that a particular bridge is safe enough to support them and attempts to cross it; unfortunately, the bridge collapses under their weight. It could be said that they believed the bridge was safe, but that this belief was mistaken. It would not be accurate to say that they knew the bridge was safe, because plainly it was not. By contrast, if the bridge had actually supported their weight, they might be justified in subsequently holding that they knew the bridge had been safe enough for their passage, at least at that particular time. For something to count as knowledge, it must actually be true.
The Aristotelian definition of truth states:
"To say of something which is that it is not, or to say of something which is not that it is, is false. However, to say of something which is that it is, or of something which is not that it is not, is true."