ROLES OF STATISTICAL ANALYSIS CONSULTING FIRMS

Statistical analysis consulting firms act as a third party helping researchers and students pursuing a master's or doctoral degree. They have multiple roles and tasks, as well as restrictions on the services they provide. Why should there be restrictions? Because the consultant's task is distinct from the student's: the main idea of the research is the student's task, and the consultant cannot take it over. The consultant's main task, rather, is to assist with technical work. The tasks are as follows:

Data Analytics Consultants Help Consider Methods

Due to a lack of statistical knowledge, students often need the help of a data analytics consultant. A consultant can help a researcher choose an appropriate methodology and research design: whether quantitative or qualitative methods suit the research; whether the design should be prospective or retrospective; and whether an observational study, a survey, or an experimental design is necessary. Consultants can also assist with the technical task of entering the data into a computer. In terms of research methodology, the consultant has a comprehensive overview and can offer alternative views of various research methods and their advantages. In the end, the researcher has to choose the appropriate one.

Statistical Data Analysis Services

Researchers use a consultant when they have no time to run the data. Commonly, they use statistical software such as SPSS, EViews, Amos, PLS, or Lisrel, and they need technical assistance with the work of running the data. What should not be lost is that researchers still understand the processes and procedures of data processing: the reason for choosing a particular software and the reason for using certain analytical methods. The technical matters of running the data can then be handled by the consultant.

Provide Consultation of Data Analysis

One of the most important steps in research is interpreting the data. Without interpretation, statistical results are only numbers without meaning. Researchers can discuss the figures with a data analytics consultant, who may explain the meaning of the data in the software output.

Ideally, a consultant is expected to help researchers write better research by using appropriate research methodology. Without one, researchers might just pick a statistical method at random, feed the data into software like SPSS, and take whatever output comes.

Second, statistical consulting firms save researchers time. A researcher could save one or two weeks, or even months. Learning the various research methods is not easy, and there are many kinds of statistical analysis methods. Researchers without a background in mathematics or statistics may find the unfamiliar terminology difficult to learn.

Restrictions of Data Analytics Consulting Task

Finally, there are some limitations on using the services of consultants.

  • Researchers should not use the services of consultants to find main ideas of the research.
  • Researchers should not ask for the help of consultants to do plagiarism.
  • Researchers should not use consultants to replace the role of researchers.
  • Finally, researchers should not claim the consultants' work as their own.

Descriptive Statistics Definition and Examples

Descriptive statistics definition and examples will be discussed in this article. Descriptive statistics is a method of statistical analysis that only describes the condition of the sample data; it does not draw conclusions beyond the sample. For example, a researcher conducts a study with 100 students as the sample. The conclusions drawn apply only to those 100 students, no more. An example of descriptive statistics in health: a study measures the prevalence of TB in a certain area, or the effectiveness of a drug in healing a disease; necessarily, its conclusions cover only that area. Other examples of descriptive statistics in economic research: the average movement of stock prices on the stock exchange, or the volatility of commodity price indexes on the futures exchange.

Commonly, descriptive statistics describes one variable, but it can also illustrate the relationship between two or more variables. For example, consider a study of gender and blood pressure in a hospital: the variable is blood pressure, with gender as a control variable. We may increase the number of variables, adding age or lifestyle. In a political study, gender, age range, and educational level become important variables in the choice of a candidate in an election. So descriptive statistics does not merely involve one variable; two or more are possible. Essentially, the conclusions only cover the sample.

Descriptive statistics differs from inferential statistics. In inferential statistics, conclusions can be drawn beyond the sample. For example, a study uses 100 students as the sample, but the conclusion is drawn for the whole college. A researcher wants to examine the study habits of students in a college with a total of 20 thousand students. The researcher certainly does not need to interview all 20 thousand; interviewing 100 to 300 students is enough to reach a conclusion about the whole college.

Descriptive VS Inferential Statistics

An inferential statistics example in health and medicine: we want to examine the effectiveness of anti-hypertensive drug A compared to drug B, taking 50 samples for the control group and 50 for the treatment group. In inferential statistics, the conclusion covers not only the 100 samples but a larger population. Another example in economics: the effect of fiscal policy on economic growth in a country, studied with five years of time-series data. In inferential statistics, the conclusions can extend beyond those five years.

The key phrase of inferential research is hypothesis testing: we test the researcher's tentative claims using a little data, a sample of the real situation. Hypothesis testing is what distinguishes descriptive from inferential statistics; in descriptive statistics there is no hypothesis testing.

The similarity of descriptive and inferential statistics is that both are quantitative methods, using statistical measurements such as the mean, median, mode, and standard deviation as conclusions. In descriptive statistics these measurements describe only the sample, whereas in inferential statistics there is a standard error, the gap between the sample statistics and the population; for the mean, for instance, it equals the sample standard deviation divided by the square root of the sample size.

Having discussed the descriptive statistics definition and examples, we need to look at the software. SPSS is a familiar tool widely used in various fields of science; the next article will cover it in detail.

Linear Regression Analysis Definition

Linear regression analysis is a statistical method for predicting the relationship between independent and dependent variables. Many other analytical methods can measure the relationship between variables, but regression analysis focuses on the relationship between the dependent variable and the independent variable(s). In particular, regression analysis helps researchers determine the change in the dependent variable caused by an independent variable while the other variables are held constant.

Linear regression analysis belongs to the group of causality analyses. In a causal relationship, one variable affects another: the independent variable is treated as fixed, while the dependent variable is random. Regression analysis differs from correlation analysis, in which no variable is designated as the cause of another; correlation treats the two variables symmetrically.

Theoretically, correlation analysis captures the relationship between variables that do not have a causal relationship; the two variables are merely associated. For example, consider the relationship between a student's weight and height. The two are probably related, but weight certainly does not cause height, or vice versa. This pattern is not the same as a causal relationship; the effect of a certain drug on a disease is a good example of a causal relationship.

In linear regression analysis, we can involve one independent variable and one dependent variable, which is commonly called simple regression analysis. We can also involve more than one independent variable with the dependent variable, which is commonly called multiple regression analysis.

Linear Regression Analysis Formula

Simply put, the regression formula is as follows:

Y = a + bx + e

Y represents the dependent variable; it is the response variable that changes when the x variable changes. a represents the constant, or intercept, a baseline value not influenced by the x variable. Meanwhile, b represents the regression coefficient; it shows the effect of the x variable on Y. e, the error, is the gap between the population model and the sample model.
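
As an illustration (not from the article itself), here is a minimal sketch of fitting this formula in Python; the data and variable names are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical observations of an independent variable x and a response Y
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9, 14.2, 15.8])

fit = stats.linregress(x, y)
print(f"intercept a   = {fit.intercept:.3f}")  # baseline value when x = 0
print(f"coefficient b = {fit.slope:.3f}")      # change in Y per unit of x

residuals = y - (fit.intercept + fit.slope * x)  # e: the sample errors
print("residuals:", np.round(residuals, 3))
```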

Validity and Reliability of Survey Instruments

In this article, we discuss the theory of the validity and reliability of survey instruments, beginning with validity. Validity is a test of how well a questionnaire measures what it is meant to measure. There are three types of validity:

1. The validity of contents

When a test uses content validity, the items designed into a survey instrument should represent all possible indicators of the concept. Where the thing a tool measures is difficult to define, for instance a person's aptitude, the judgment of an expert may be needed to assess the relevance of the question items.

2. The validity related to criterion (criterion-related validity)

A survey instrument is valid under this criterion when it effectively measures its criteria or indicators. In such validity, the measurement or procedure is compared against a value already proven valid, or against reality on the ground. For example, in validating a written test for drivers, the test is valid if drivers who pass can actually drive properly.

Criterion-related validity can be divided into two groups.

a. Concurrent validity

Concurrent validity is a validity test for a survey instrument in which the criterion is measured at the same time as the test. For example, in examining depression, a test has concurrent validity if it measures the level of depression the participant is currently experiencing.

b. Predictive validity

Predictive validity, in contrast, is measured after the test. An example is an aptitude test, which helps measure the likelihood that someone will be successful in a career or a particular job. Researchers only know the outcome after the study subjects have taken up their work.

3. Construct Validity

Meanwhile, a survey instrument has construct validity if it measures the relationship between theoretical predictions and empirical values. Construct validity measures the fit between a theoretical concept and the more specific measurement items designed to capture that construct. An intelligence test is one example of construct validity testing. Finally, researchers need to define the variable and then create specific measurements reflecting the latent variable.

Reliability

Reliability is the consistency of a research instrument when measurements are repeated at different times or with different subjects. Occasionally, two assessors administer an instrument test and compare the results; similar results reflect reliability. Likewise, a tool can be compared at different times, and consistent results mean the tool is reliable.
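
One common reliability statistic for questionnaires is Cronbach's alpha (the article does not name a specific statistic; alpha is a standard choice). A minimal sketch in Python with hypothetical answers:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows are respondents, columns are questionnaire items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert answers from five respondents to four items
answers = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
# Values of roughly 0.7 or above are usually read as acceptable consistency
print(f"Cronbach's alpha = {cronbach_alpha(answers):.2f}")
```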

Originality Research Topics for College Students

In a scientific paper, the originality of the research topic is a major element when a student or researcher writes a paper, thesis, or research report. Originality is the novelty of the study: good research has new findings that contribute both to science and to real life. The challenge, however, is how to find topics that are interesting yet original, since many research topics become trapped in plagiarism issues.

Change Research Population

If we browse the theses and dissertations in a library, most of the topics refer to previous research. Does a study whose topic is similar to previous research remain original? The answer is yes. A thesis, dissertation, or scientific paper retains originality even though it builds on previous research, as long as it differs in location. For example, researchers study the effect of tariff or quota enforcement on reducing product imports in one country; researchers in a different country can conduct research with exactly the same variables. It is not plagiarism as long as the researcher writes quotations according to the correct rules. A study might involve exactly the same variables as other studies, but when the research location is different, it is still an original paper.

To write a thesis or scientific paper with an original topic, start by examining the phenomena occurring around you. Browse the internet for similar topics of study. If there is a similar topic, check whether the conditions in that study are the same as the conditions in the phenomenon you observe. If they are not, your research topic might be original.

Combining Several Research Topics

Another way to write original research is to combine several research topics into your main research topic. Certainly, you must read many previous research references. Then look for the common thread of each study: studies are usually linked, and among the links there is a common thread that can become your research topic. This is not plagiarism.

An example in economics: we read the following research topics: the impact of economic growth on the domestic economy; the impact of interest rates on the stock index; and the impact of the global economy on the domestic economy. Based on these three research topics, we can combine them into a new research topic, for example, the influence of the global economy on the domestic economy.

In addition, you need to pay attention to the theoretical basis that supports the interconnection between the variables.

Descriptive vs inferential statistics

The definition of descriptive statistics differs from that of inferential statistics. Descriptive statistics only describes the condition of the data through parameters such as the mean, median, mode, frequency distribution, and other statistical measurements, while inferential statistics tests hypotheses on sample data to reach conclusions about the population. In descriptive statistics, we need to present:

1. Central tendency. The central tendency measurement used most is the frequency distribution, which suits nominal and ordinal (categorical) data. The mean is the central tendency measurement for continuous data. Other descriptive measurements of central tendency are the median (middle value) and the mode (most frequent value).

2. Dispersion. The standard deviation is a dispersion measurement representing the spread of the data; it suits the diversity of numerical or continuous data. For categorical data, the range is a suitable measurement.
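
As an illustration in Python (hypothetical data; the article itself works in SPSS), the measurements above can be computed as follows:

```python
import numpy as np
from collections import Counter
from statistics import median, mode

scores = [60, 70, 70, 75, 80, 85, 85, 85, 90, 95]   # continuous data
print("mean   =", np.mean(scores))                  # central tendency, continuous
print("median =", median(scores))                   # middle value
print("mode   =", mode(scores))                     # most frequent value
print("std    =", round(np.std(scores, ddof=1), 2)) # dispersion of continuous data
print("range  =", max(scores) - min(scores))        # spread as max minus min

answers = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]
print(Counter(answers))   # frequency distribution for categorical data
```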


Inferential vs Descriptive Statistics

Inferential statistics, again, tests hypotheses on sample data to reach more general conclusions about the whole population. Inferential research is needed when the researcher has a limited budget and must work efficiently, so the research is done by taking a sample smaller than the whole population, and predictions are made from it. Inferential statistics requires assumptions to be fulfilled. The first assumption that must be met is randomization in the sampling process; this is necessary because inferential statistics needs a sample that represents the population. The other assumptions depend on the analysis tool used: in multiple regression analysis, the assumptions are no multicollinearity, no heteroscedasticity, no autocorrelation, and normality.

Statistical analysis methods used in inferential statistics include the t-test, ANOVA, ANCOVA, regression analysis, path analysis, structural equation modeling (SEM), and other methods, depending on the purpose of the research. In inferential statistics, we test a hypothesis to determine whether a sample statistic supports broader conclusions about the population. The sample statistic is compared against the population distribution pattern as the norm; therefore, knowing the distribution pattern of the sample is important in inferential statistics.
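
For instance, the t-test named above can be run in a few lines of Python; this sketch mirrors the earlier drug A versus drug B example with simulated, hypothetical numbers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
drug_a = rng.normal(loc=135, scale=10, size=50)  # blood pressure after drug A
drug_b = rng.normal(loc=140, scale=10, size=50)  # blood pressure after drug B

t_stat, p_value = stats.ttest_ind(drug_a, drug_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# If p < 0.05, we reject the hypothesis that the two drugs have the same
# mean effect: a conclusion about the population, not just the 100 patients.
```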

Inferential Statistics in Practice

A good example of inferential statistics is the presidential election. Many agencies conduct quick-count surveys to know the elected president more quickly. A survey agency takes several polling stations (called TPS in Indonesia) as a sample of the total population, and the TPS sample is used to generalize to the overall population. Say 2,000 polling stations are taken from a population of 400,000. The results for the 2,000 polling stations are descriptive statistics; drawing conclusions about all 400,000 polling stations is inferential. The strength of inferential statistics depends on the sampling technique and the randomization process. If the randomization is done correctly, the result can predict the population precisely, saving money and time.
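
A minimal sketch of this quick-count logic in Python (all numbers hypothetical, with each station reduced to a win/lose outcome for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
# 400,000 polling stations; True means candidate X wins the station
population = rng.random(400_000) < 0.54

# Randomly sample 2,000 stations, as in a quick count
sample = rng.choice(population, size=2_000, replace=False)

p_hat = sample.mean()                            # descriptive result for the sample
se = np.sqrt(p_hat * (1 - p_hat) / sample.size)  # standard error of the proportion
print(f"sample estimate: {p_hat:.3f} +/- {1.96 * se:.3f} (95% interval)")
print(f"population value: {population.mean():.3f}")  # what inference tries to recover
```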

In the manufacturing industry, inferential statistics is very useful. Management can determine and control how many products fall outside the standard, or are defective, by taking a few samples. Imagine if management had to check every product just to find the defects: it would certainly cost much time and money, especially if every packaged product had to be checked. It would be neither effective nor efficient. Fortunately there is Six Sigma, one of the tools used in this regard. Six Sigma applies inferential statistics by taking product samples and measuring sigma, the standard deviation (a measure of diversity), of the product. The number of defective products must not exceed a certain standard.
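
As a simple illustration of that sampling idea (hypothetical numbers, using a normal-approximation confidence interval rather than the full Six Sigma toolkit):

```python
import math

sample_size = 500      # inspected items
defects = 7            # defective items found in the sample
p_hat = defects / sample_size

# Normal-approximation 95% confidence interval for the true defect rate
se = math.sqrt(p_hat * (1 - p_hat) / sample_size)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"estimated defect rate: {p_hat:.2%} (95% CI {low:.2%} to {high:.2%})")
# Management can compare this interval against the allowed standard
# instead of inspecting every packaged product.
```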

Source reference:

1. http://www.socialresearchmethods.net/kb/statinf.php
2. https://statistics.laerd.com/statistical-guides/descriptive-inferential-statistics.php

Exploratory factor analysis vs confirmatory factor analysis


This article discusses the differences between exploratory factor analysis and confirmatory factor analysis. Exploratory factor analysis is abbreviated EFA, while confirmatory factor analysis is known as CFA.

About Exploratory Factor Analysis (EFA)

EFA is a statistical method for building a structural model out of a set of variables. It is a factor analysis method for identifying the relationships between the manifest variables that build a construct; researchers also call manifest variables indicator variables. A researcher uses EFA when there is no prior information for grouping a set of indicators: the researcher starts from the set of indicators (manifests) and then derives the variables. In conditions where the latent variables do not have clear indicators, EFA is the appropriate method. Indicators of one latent variable may possibly overlap with indicators of other latent variables.

Researchers can use SPSS to run an EFA. All the indicator data are input into the software, with no assumed grouping of indicators. In EFA, we do not know in advance how many factors or latent variables will emerge, although researchers are allowed to specify the expected number of factors.

The factor loading is a measurement indicating into which group an indicator will gather. When an indicator's loading on one factor is greater than on the others, the indicator gathers into that factor.
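
A minimal EFA sketch in Python with scikit-learn (the article uses SPSS; the simulated data here are purely illustrative):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 300
f1 = rng.normal(size=n)   # hidden latent variable 1
f2 = rng.normal(size=n)   # hidden latent variable 2

# Six manifest indicators: three driven by each latent variable, plus noise
X = np.column_stack([f1, f1, f1, f2, f2, f2]) + 0.3 * rng.normal(size=(n, 6))

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0).fit(X)
loadings = fa.components_.T    # rows = indicators, columns = factors
print(np.round(loadings, 2))   # each indicator loads strongly on one factor
```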

About Confirmatory Factor Analysis (CFA)

CFA is another kind of factor analysis, common in social research. This method examines whether, statistically, the indicators gather consistently in a group. In CFA, researchers test whether the data fit a previously established model or not. The fundamental difference between CFA and EFA is that in CFA researchers hold the prior assumption that certain indicators fit into certain latent variables; the researcher has developed a hypothetical model based on a theoretical framework or the previous studies referenced.

Since there is an established model to examine, CFA tests that model. CFA is a part of structural equation modeling (SEM).

Fit measurement in CFA is the same as the SEM fit indices. Chi-square, RMSEA, GFI, and AGFI are some of the fit indices used, in addition to the weighted value (loading) of each indicator.

The similarity of EFA and CFA

One similarity between EFA and CFA is the use of variance to measure the indicators' contribution to the construct variables.

Normal Distribution Generator in Excel

[Figure: the normal distribution. Source: onlinecourses.science.psu.edu]

In a study, we sometimes face a limited amount of data that is not normally distributed, whereas parametric statistics require a lot of data, at least 30 observations, to meet the prerequisites. The question is whether it is allowed to use a normal distribution generator with such limited numbers. The answer is yes. Even if we only know the mean (average) and standard deviation of the data, we can simulate 1,000 data points or more. For example, we know the mean = 20 and the standard deviation = 5, and the sample comes from a normally distributed population. To obtain normally distributed random numbers, or numbers from another particular distribution, we can use Monte Carlo simulation in the various software available.

The steps to simulate limited data following a particular distribution pattern are as follows:

1. Define the starting point value

Getting the next random number requires a starting point (a seed). However, the starting value does not significantly affect the simulated data, because it is just one number among the thousands that will be generated by the simulation.


2. Determine the expected population distribution

Prior to simulating the data, we must determine the assumed distribution of the population data we expect. For example, we assume that the data will follow a normal distribution pattern.

We need to know the various types of distribution appropriate to the scale of the data.

If the data are on a numerical scale, the possible distributions include the normal, log-normal, exponential, and others.

Meanwhile, if the scale is categorical, the distributions include the binomial, uniform, multinomial, hypergeometric, and so on.


3. Determine the required parameters of the population distribution

Every distribution has certain statistical parameters. For example, if we assume a normal distribution, then we should know at least two parameters: the mean and the standard deviation. These two parameters will be used to generate the rest of the data.


4. Run the simulation based on the assumptions

After determining the necessary assumptions, the next step is to run the data. We can iterate 1,000 times, or even more than 1,000 times; a run of 1,000 iterations gives 1,000 random numbers that follow the distribution pattern we chose.


5. Make reports

Once the run is complete, the output can be clicked to display any report required.


The result is 1,000 random numbers that follow a certain distribution pattern, such as the normal distribution. Obviously, the mean (average) and standard deviation of the 1,000 simulated data points will follow the parameters above. With more iterations, the process is expected to produce smoother data approaching the population.

This is how limited data can be expanded through a Monte Carlo simulation, here using the Crystal Ball software by Oracle.
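
For readers without Crystal Ball, the same five steps fit in a few lines of Python (an illustration following the mean = 20, standard deviation = 5 example above):

```python
import numpy as np

rng = np.random.default_rng(seed=123)          # step 1: the starting point (seed)
data = rng.normal(loc=20, scale=5, size=1000)  # steps 2-4: distribution, parameters, run

# Step 5: report. The simulated data recover the assumed parameters.
print(f"simulated mean = {data.mean():.2f}")
print(f"simulated std  = {data.std(ddof=1):.2f}")
```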

SEM Structural Equation Modeling with Lisrel, AMOS, or SmartPLS?


SEM (structural equation modeling) is a multivariate statistical analysis method. Its data processing differs from regression or path analysis: SEM is more complicated because it is built from both a measurement model and a structural model.

To process SEM data easily, we need statistical software. There are many programs for SEM data, such as Lisrel, AMOS, and SmartPLS. Which of them is the most suitable to use? Below is a short review:

Advantages of Lisrel for SEM Structural Equation Modeling

Lisrel was developed by Karl Jöreskog and Dag Sörbom and is the statistical software most familiar among researchers and practitioners. The advantage of LISREL is its ability to identify complex relationships between variables. It can be operated in two ways, either with syntax or with SIMPLIS (simple LISREL), which makes it widely used across disciplines. Syntax is favored by users who are familiar with programming languages, while SIMPLIS is an alternative for those who are not.

A selection of estimation methods is available in Lisrel, so we need not cling to maximum likelihood; which estimation method to use depends on the condition of the data.

Disadvantages of Lisrel

One of the disadvantages is its inability to process data with a small sample size. When we have fewer than 200 samples and the model is complex, the estimation results are sometimes not in line with our expectations.

Advantages of Amos for SEM Structural Equation Modeling

Like SPSS, AMOS is statistical software developed by IBM. The Amos software helps examine hypothesized relationships between variables. Through it, we can determine the strength of the relationships between variables, both latent and manifest, how significant those relationships are, and how well the hypothesized model fits the real field data.

With Amos, we need no syntax or complicated programming language to operate the software, which is an advantage for beginners and for those unfamiliar with programming. In Amos, we simply draw the latent and manifest variables and then connect them using the arrows available.

Disadvantage of Amos

Amos's advantage is also its disadvantage. We need to draw many diagrams when the model is complex, which is very tedious work, whereas in Lisrel it is simpler with the programming language: we just copy and duplicate the syntax, run it, and the model is complete, however complex the model we want.

Advantages of Smart PLS

SmartPLS, based on partial least squares (PLS), is statistical software with the same goal as Lisrel and AMOS: examining relationships between variables, whether among latent variables or with their indicator (manifest) variables.

Researchers use SmartPLS when the study has a limited number of samples while the model is complex. Such a model will not run well in Lisrel or Amos, which require sample adequacy.

Another advantage of SmartPLS is its ability to process both formative and reflective SEM models. A formative SEM model is one in which the indicator variables form the construct variable, so the arrows point from the indicator variables to the construct variable; statistically, the consequence is that there is no error term on the indicator variables. A reflective SEM model, by contrast, is one in which the construct variable is reflected in its indicator variables, so the arrows point from the construct variable to its manifest variables.

Disadvantages of Smart PLS

Because this software is designed to process small data sets, it is not suitable for research with a large sample.

How to analyze questionnaire data using SPSS

How do we analyze questionnaire data? The data must pass through various stages: entry into the computer via SPSS or MS Excel, testing validity and reliability, descriptive analysis, and hypothesis testing. Here are the stages:

1. Validity and Reliability

What distinguishes questionnaire data processing from secondary data processing is validity testing. When we conduct a study with a questionnaire, we need to test the validity and reliability of the questionnaire. Why? Because the questionnaire is composed by the researcher, while it is the respondent who answers it; the purpose is to minimize the interpretation gap between researcher and respondent.

Moreover, a good questionnaire should be understood by respondents as well as by its maker, and it should have a high level of consistency over time.

With secondary data, on the other hand, we do not need to test validity and reliability.

2. Data Entry

After the questionnaires are collected, the data need to be input into a computer. The most common software for data entry is Excel; its spreadsheets are familiar to all of us. How should the data be arranged in the spreadsheet? The rows going down hold the respondents, while the columns hold the item numbers and the questionnaire answers. Input into SPSS is similar to an Excel spreadsheet: the data are arranged with rows as respondents and columns as questions.

For closed questions, we can assign a score to each answer option in a question. For example: strongly agree = 5, agree = 4, neutral = 3, disagree = 2, and strongly disagree = 1. Only the scores are input into the spreadsheet.

In certain conditions, negatively worded questions are possible. In such conditions, the score is reversed: 5 changes to 1, 4 to 2, and so on.
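
A minimal sketch of this layout and reverse-scoring in pandas (the article uses Excel/SPSS; the column names and answers here are hypothetical):

```python
import pandas as pd

# Rows are respondents, columns are items, values are Likert scores 1-5
df = pd.DataFrame({
    "q1": [5, 4, 3, 5, 2],
    "q2": [4, 4, 2, 5, 3],
    "q3_negative": [1, 2, 4, 1, 4],   # a negatively worded item
})

df["q3_reversed"] = 6 - df["q3_negative"]   # 5 -> 1, 4 -> 2, and so on
print(df)
print(df["q1"].value_counts())              # frequency distribution of one item
```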

3. Descriptive analysis

To present the questionnaire results, researchers need to process the data using descriptive analysis. Which display suits questionnaire data? A frequency distribution is a common format: it presents how many respondents answered agree, how many answered disagree, and so on.

In descriptive statistics, common measurements such as the mean, median, mode, and standard deviation also need to be provided. However, when we report ordinal data with a mean and standard deviation, we are in fact treating the data as numeric.

4. Hypothesis testing to analyze questionnaire data

Can questionnaire research test a hypothesis? Certainly. Likert-scale questionnaire data are actually ordinal data, for which the most appropriate statistical techniques are non-parametric. However, due to the limitations of non-parametric statistical tools, data transformation is sometimes applied to turn the ordinal data into a numerical scale. The transformation is not a must: as long as the data distribution is normal, parametric statistical methods can be applied.
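
One such non-parametric technique is the Mann-Whitney U test (named here as an example; the article does not specify one). A minimal sketch with hypothetical Likert answers:

```python
from scipy import stats

group_a = [5, 4, 4, 3, 5, 4, 2, 5]   # Likert answers from group A
group_b = [3, 2, 4, 2, 3, 1, 3, 2]   # Likert answers from group B

u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
# A small p suggests the groups answer differently, without assuming normality
```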

Having trouble with questionnaire data processing? Contact us at +6281321898008.

Data Analysis in Quantitative and Qualitative Research

Research methods are different from research techniques, although the terms sound similar: a data analysis method refers to the more general approach, and a data analysis technique is part of a data analysis method. Research methods broadly divide into two parts, quantitative and qualitative. Data analysis in quantitative research comprises various analytic techniques such as correlation, regression, comparative, descriptive, and similar techniques.

Quantitative data analysis methods process primary or secondary data through statistical or mathematical methods. The advantage of this approach is that it is more comprehensive.

Data Analysis in Quantitative Research

Quantitative data analysis consists of several analytic techniques, such as:

1. Descriptive analysis: we describe the collected data through statistical measures such as the mean, median, mode, and standard deviation.

2. Comparative analysis: we compare one phenomenon with another, or compare the same phenomenon in different subject groups.

3. Correlation analysis: we examine the connection between one phenomenon and another that theory has previously linked (a minimal sketch follows this list).

4. Causality analysis: we re-examine the cause-and-effect relationships between several phenomena that, in theory, allegedly influence each other.
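
As an illustration of technique 3 above, correlation analysis can be run in Python (hypothetical data):

```python
from scipy import stats

# Hypothetical paired observations of two phenomena
study_hours = [2, 4, 5, 7, 8, 10, 11, 13]
exam_scores = [52, 58, 60, 68, 72, 80, 83, 90]

r, p_value = stats.pearsonr(study_hours, exam_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")  # strength and significance of the link
```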

Quantitative data analysis methods are familiar in the exact sciences, economics, engineering, and medicine. Today, much social research also applies the methods of these fields, so its approach often uses a quantitative one.

Data Analysis in Qualitative Research

Qualitative data analysis methods process data from in-depth observations, interviews, and literature. The advantage of this method is the depth of the study results.

Qualitative data analysis methods are more widely used in the social sciences: law, sociology, politics, and so on. Although many social subjects currently use quantitative methods, qualitative methods provide the depth of analysis required in the social field. How can one examine the culture of a particular ethnic group without profound observation? How can one explore the philosophical aspects of specific provisions in a law without an in-depth review? For such things, qualitative methods are obviously necessary.

Qualitative data analysis consists of a variety of analytical techniques and steps, such as:

1. Organizing the data: qualitative data must be organized to be more structured. Why is this necessary? Because, as we all know, in qualitative research there is no definite measurement, let alone a standardized scale such as in quantitative research.

2. Coding the data: because the data are mostly verbal rather than numeric, researchers need to code them to homogenize items that carry the same meaning.

3. Connecting concepts: one concept is connected with other concepts that may influence each other, where the size of the relationship or influence cannot be described by numbers.

4. Legitimating the results: the conclusions are validated by comparing them with other concepts that we think are contrary to them, and by asking how many other concepts contradict the conclusions.