ROLES OF STATISTICAL ANALYSIS CONSULTING FIRMS

Statistical analysis consulting firms act as a third party that helps researchers and students pursuing a master's or doctoral degree. They have multiple roles and tasks, but also restrictions on the services they may provide. Why should there be limitations? Because the work is tied to academic activities and scientific output: the services of a statistical consulting firm must not replace the student's own role in completing the research.
The main tasks of a statistical analysis consulting firm are as follows.
Providing Recommendations on Research Design and Methodology

Statistical analysis consulting firms are needed by students and doctoral candidates who have limited knowledge of research methodology. The consulting firm is tasked with giving input on the appropriate methodology and research design: whether quantitative or qualitative methods suit the study, whether the design should be prospective or retrospective, and whether an observational study, a survey, or an experimental design is needed. Naturally, the consulting firm should provide input. The difference between the consulting firm and the researcher is that the firm only provides a comprehensive overview of the available research methods with their advantages and disadvantages; ultimately the researcher is the party who decides which methods to use.

Statistical Data Analysis Services

Statistical consulting firms are needed when researchers lack the skills or the time to process data with statistical software such as SPSS, EViews, Amos, PLS, or Lisrel. The consulting firm can assist technically with running the data. What should not be lost, however, is that the researcher still understands the processes and procedures of data processing: why a particular software was chosen and why a certain analytical method was used. The technical matter of running the data can then be handled by the consulting firm.

Providing Consultation on Data Analysis

Interpreting the output of statistical software is the step that turns the collected data into something meaningful; without interpretation, statistical results are only numbers without meaning. The researcher has an important role in discussing the findings behind the statistical figures, but the consulting firm has an important role in explaining what the numbers in the software output tables mean.

Through cooperation between the statistical analysis consulting firm and the researcher, the researcher is expected to produce better research: better able to use an appropriate research methodology and an appropriate method of statistical analysis. A researcher could, in fact, pick a statistical method at random, run it in software such as SPSS, and output would appear; but is that analytical method appropriate? This is where the consulting firm can offer its considerations.

Secondly, a statistical consulting firm can save the researcher time, perhaps a week or two, or even months. Learning the many research methods is not easy, and there are many kinds of statistical analysis methods. Researchers without a background in mathematics or statistics will find it difficult to learn statistical terms they have seldom heard.

Statistical consulting firms may be used by students of doctoral or master's programs, as long as the firm does not replace their role as researchers. The consulting firm only gives appropriate considerations about the various methods, while the decision remains with the researcher. The firm can provide technical assistance in running the data, but the researcher must still understand the process. In short, the firm serves only as a third party providing consulting services without being involved as a researcher.

Linear Regression Analysis Definition

Linear regression analysis is a statistical method used to model the relationship between independent and dependent variables. Many other analytical methods can be used to measure the relationship between variables, but regression analysis focuses on the relationship between the dependent variable and one or more independent variables. In particular, regression analysis helps researchers determine the change in the dependent variable caused by an independent variable while the other variables are held constant.

Linear regression analysis belongs to the group of causality analyses: in a causal relationship, one variable affects another. In regression, the independent variable is treated as fixed while the dependent variable is random. Regression analysis differs from correlation analysis, in which no variable is taken as the cause of another; in correlation, the two variables are treated symmetrically.

Theoretically, correlation analysis is used to describe the relationship between variables that do not have a causal relationship; the two variables are related only by association. For example, consider the relationship between a student's weight and height: the two are probably related, but weight certainly does not cause height, or vice versa. This pattern differs from a causal relationship; the effect of a particular drug on a disease is a good example of a causal relationship.

In linear regression analysis we can involve one independent variable and one dependent variable, which is commonly called simple regression analysis. We can also involve more than one independent variable with the dependent variable, which is commonly called multiple regression analysis.
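A minimal sketch (not from the article) of what simple linear regression looks like in code, assuming Python with the statsmodels package; the data are generated artificially for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.normal(size=100)                  # independent variable
y = 2.0 + 1.5 * x + rng.normal(size=100)  # dependent variable with random noise

X = sm.add_constant(x)        # add the intercept term
model = sm.OLS(y, X).fit()    # ordinary least squares fit
print(model.params)           # estimated intercept and slope

For multiple regression, X would simply contain more than one column of predictors.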

Validity and Reliability of Survey Instruments

In this article we will first discuss the theory of survey-instrument validity, then reliability. A validity test measures the content of the questionnaire. There are three types of validity:

1. Content validity

When a test relies on content validity, the items designed into the survey instrument should represent all possible indicators of the construct. Each item is written as an individual question, while together the items should cover the construct broadly. In cases where the instrument measures something that is hard to define, such as a person's aptitude, the judgment of an expert can be used to assess the relevance of the question items.

2. Criterion-related validity

A survey instrument is said to have criterion-related validity if it effectively measures an external criterion or indicator. In this type of validity, the measurement or procedure is compared against a value that has already been proven valid, or against reality on the ground. For example, a written driving test is valid if drivers who pass it are able to drive properly in reality.

Criterion-related validity can be divided into two groups.

a. Concurrent validity

This is validity in which the criterion is measured at the same time as the test scores from the survey questionnaire. An example is testing the level of depression: a test has concurrent validity if it measures the level of depression the participant is currently experiencing. In short, the measurement is carried out at the same time as what you want to measure.

b. Predictive validity

This is the case when the criterion is measured after the test. An example is an aptitude test, which helps measure how likely someone is to be successful in a career or a particular job. Researchers will only know the outcome after the study subjects have done the work.

3. Construct Validity

A survey instrument is said to have construct validity if it measures the relationship between a theoretical prediction and the empirical value. Construct validity measures the fit between a theoretical concept and the more specific measurement items built to measure the construct. An intelligence test is one example: a researcher who measures intelligence must first define intelligence and then create specific measurement items that can measure a person's intelligence.

Lisrel Structural Equation Modeling

Structural Equation Modeling (SEM) analysis can be carried out with Lisrel or Amos. Before discussing which software is suitable for SEM analysis, I will briefly discuss the concept of SEM itself. SEM is a statistical analysis technique that aims to test causality between one or more exogenous latent variables and one or more endogenous latent variables. What are exogenous and endogenous latent variables? Exogenous latent variables are variables that cause other variables and cannot be observed directly, whereas endogenous latent variables are variables caused by the exogenous variables and likewise cannot be observed directly. Why are they called latent, or not directly observable? Because we cannot measure these variables directly. An example is motivation: we cannot directly measure a person's motivation except through indicators. By contrast, if we want to measure weight, we can easily measure it in kilograms with a single measurement.

SEM analysis with Lisrel or Amos is suitable for measuring causality among variables that cannot be measured directly, such as motivation, performance, attitude, perception, and other social variables. Through SEM we can map the pattern of relationships among these variables. Both software packages can estimate the relationships between latent variables as well as between latent variables and their indicators. The coefficients between latent variables show the influence of one latent variable on another, while the coefficients between a latent variable and its indicators show how well the indicators represent the latent variable.

Lisrel and Amos do not differ significantly in SEM analysis; both are equally powerful for estimating SEM models. The only difference is in how they are operated. Lisrel users need at least a basic grasp of programming logic, although Lisrel can also be operated through menus; the syntax approach is actually simpler once learned, and Lisrel will draw the model diagram automatically when the syntax runs correctly. In Amos, users do not need to understand programming at all, because no syntax is required to run the model: users simply draw the model manually. This convenience can also be a drawback of Amos, since drawing complicated models can be quite tedious, and the user must name every indicator and error term.

The output of both packages describes the fit of the SEM model: whether the hypothesized model fits the sample data or not. The measures reported include Chi-square, RMSEA, GFI, and others. These measures describe the gap between the hypothesized model and the sample data, and several others indicate the level of estimation error. A good model is achieved when the gap between the hypothesized model and the sample data is not statistically significant.

Furthermore, Lisrel and Amos are able to perform exploratory analysis by recommending the model that fits the data most closely.

 

Multiple Regression Analysis SPSS Interpretation

What is multiple regression analysis? As explained for simple linear regression, regression analysis is a statistical method that models the relationship between the predictor (independent) variables and the response (dependent) variable through a linear relationship expressed as a mathematical equation. The fundamental difference between multiple linear regression and simple linear regression lies in the number of independent variables (X): multiple regression has more than one independent variable, while simple regression has only one.

An example of multiple linear regression is the relationship of income and savings to consumption. Income and savings are the predictor variables used to predict the amount of consumption, the response variable.

The multiple regression equation is described as follows:

Y = B0 + B1X1 + B2X2 + … + BkXk + e

Y is the response variable

Xi is the i-th predictor variable

B0 is the intercept or constant

Bi is the regression coefficient (slope) of the i-th predictor

k is the number of predictor variables

e is the error term

In multiple regression analysis there are two tests: the F test and the t test. The F test is used to test the significance of the effect of the predictor (independent) variables on the response (dependent) variable simultaneously, that is, all together. If the calculated F value is higher than the table F value at a chosen confidence level, we can state that the X variables simultaneously have an effect on Y. The second test is the t test, which tests the regression model partially: whether each predictor variable individually has an influence on the response variable. If the calculated t value is large enough and exceeds the table t value, that variable can individually be declared to have an influence on Y.
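To make the F test and t test concrete, here is a hedged sketch in Python with statsmodels, reusing the income-savings-consumption example with invented numbers; the attributes shown are those of statsmodels' OLS results object.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
income = rng.normal(50, 10, 200)
savings = rng.normal(10, 3, 200)
consumption = 5 + 0.6 * income + 0.3 * savings + rng.normal(0, 2, 200)

X = sm.add_constant(np.column_stack([income, savings]))
fit = sm.OLS(consumption, X).fit()

print(fit.fvalue, fit.f_pvalue)   # simultaneous (overall) F test
print(fit.tvalues, fit.pvalues)   # partial t tests, one per coefficient
print(fit.rsquared_adj)           # adjusted R-square (discussed below)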

The adjusted R-square value measures how much of the response variable is explained by all the predictor variables included in the multiple linear regression equation. Why is the adjusted R-square used rather than the plain R-square? Because the R-square value increases whenever a variable is added to the model, even if the added variable is unrelated to the response; the adjusted R-square corrects the R-square for the number of predictor variables, so the fit is not overstated simply because more variables were included.
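For reference, the standard adjustment (a textbook formula, not quoted from this article) is:

Adjusted R-square = 1 - (1 - R-square) × (n - 1) / (n - k - 1)

where n is the number of observations and k is the number of predictor variables.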

The software available for multiple linear regression modeling is increasingly diverse. SPSS is suitable for data from social studies, while EViews is suitable for modeling in economics and finance; in fact, we can even use Microsoft Excel to perform multiple linear regression modeling.

What assumptions must be met in multiple linear regression analysis? In addition to the assumptions already mentioned for simple linear regression, such as linearity, homoscedasticity, and freedom from autocorrelation for time-series data, there is one additional assumption: no multicollinearity.

Multicollinearity is a perfect (or near-perfect) relationship between predictor variables. A model is expected to be free of multicollinearity, that is, to contain no predictor variables that have a perfect relationship with each other. Why are two or more perfectly related variables not allowed in the model? Besides interfering with the estimation, two perfectly related variables in one model are simply redundant: one of them should be excluded, because it is already proxied by the other.

Source: http://pareonline.net/getvn.asp?n=2&v=8

Descriptive vs inferential statistics

Descriptive statistics differ from inferential statistics. Descriptive statistics only describe the condition of the data through measures such as the mean, median, mode, frequency distribution, and other statistical summaries, while inferential statistics draw conclusions about a population from hypotheses tested on sample data. In descriptive statistics, we need to present:

1. Central tendency. For nominal and ordinal (categorical) data, the frequency distribution and the mode (most frequent value) are the usual measures, while the mean is the measure of central tendency for continuous data. Another descriptive measure of central tendency is the median (middle value).

2. Dispersion. The standard deviation is a dispersion measure that represents the spread of the data and is suitable for numerical or continuous data; for ordinal data, the range is a simpler measure. A short sketch of these measures follows this list.
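As a rough illustration (not from the article), the same descriptive measures can be computed in Python with pandas; the scores below are invented.

import pandas as pd

scores = pd.Series([4, 5, 3, 4, 4, 2, 5, 3, 4, 4])

print(scores.mean())                 # central tendency for numeric data
print(scores.median())               # middle value
print(scores.mode().iloc[0])         # most frequent value
print(scores.value_counts())         # frequency distribution (categorical view)
print(scores.std())                  # dispersion: standard deviation
print(scores.max() - scores.min())   # dispersion: range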

Inferential statistics, on the other hand, draw conclusions from sample data and generalize them to the whole population. Inferential research is needed when the researcher has a limited budget, so the research is done more efficiently by taking a sample smaller than the whole population. In an inferential study, prediction is performed, and inferential statistics require certain assumptions to be met. The first assumption is randomization in sampling, which is necessary because inferential statistics need a sample that represents the population. The other assumptions depend on the analysis tool used; in multiple regression analysis, for example, the assumptions concern multicollinearity, heteroscedasticity, autocorrelation, and normality.

Statistical methods used in inferential statistics include the t-test, ANOVA, ANCOVA, regression analysis, path analysis, structural equation modeling (SEM), and other methods, depending on the purpose of the research. In inferential statistics we test hypotheses to determine whether a statistic from the sample supports a broader conclusion about the population. The sample statistic is compared with the population distribution used as the norm, so knowing the distribution pattern of the sample is important in inferential statistics.

A good example of inferential statistics is a presidential election. Many agencies conduct quick-count surveys to obtain results quickly and thus learn the elected president sooner. The survey agency takes a number of polling stations (called TPS) as a sample of the total population, and the sample is used to generalize to the whole population. Say 2,000 polling stations are sampled out of 400,000: the results from the 2,000 stations are descriptive statistics, whereas the conclusions drawn about all 400,000 stations are inferential. The strength of inferential statistics depends on the sampling technique and the randomization process; if randomization is done correctly, the result can predict the population precisely, saving both money and time.
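Below is a small hypothetical sketch of the quick-count logic in Python: a random sample of "polling stations" is drawn from an artificial population, and the sample proportion plus an approximate 95% confidence interval is used to infer the population result. All numbers are invented.

import numpy as np

rng = np.random.default_rng(1)
population = rng.random(400_000) < 0.54      # artificial "true" result per station

sample = rng.choice(population, size=2_000, replace=False)
p_hat = sample.mean()                        # descriptive statistic of the sample

se = np.sqrt(p_hat * (1 - p_hat) / sample.size)
print(p_hat)                                 # sample estimate
print(p_hat - 1.96 * se, p_hat + 1.96 * se)  # inference about the population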

In the manufacturing industry, inferential statistics are very useful. Management can determine and control how many products are outside the standard, or defective, by taking only a few samples. Imagine if the company had to check every product just to find the defects: it would cost a great deal of time and money, especially if all the products are already packaged, and it would be neither effective nor efficient. Fortunately there is Six Sigma, one of the tools used for this purpose. Six Sigma principles use inferential statistics: product samples are taken and the sigma, or standard deviation (a measure of variability), of the product is measured, and the number of defective products must not exceed a certain standard.

Source reference:

1. http://www.socialresearchmethods.net/kb/statinf.php
2. https://statistics.laerd.com/statistical-guides/descriptive-inferential-statistics.php

Confirmatory Factor Analysis with SPSS

Confirmatory factor analysis is a statistical method used to describe the variability among observed variables in terms of a smaller number of groups called factors. The software most often used these days to run confirmatory factor analysis is SPSS, Lisrel, or Amos. For instance, suppose we have a set of 20 indicators or question items; these 20 items can be grouped into smaller groups, for example into 4 factors.

Confirmatory factor analysis was first developed in the field of psychometrics, but these days factor analysis is widely used in various scientific fields such as the social sciences, behavioral sciences, marketing science, and any science that involves many indicators that need to be simplified into a few factors. Indeed, factor analysis is very useful for studying problems in which we face a large set of indicators, questions, or variables. Moreover, confirmatory factor analysis can confirm whether the items assigned to each factor fit the theoretical model or not.

Confirmatory factor analysis can also be used to test the validity and reliability of the question items written by the researcher. For each item we learn its factor loading and whether the item is suitably grouped into the same latent variable. Confirmatory factor analysis matches the indicator-to-latent measurement model against the data, which is why it is usually run before the structural model is analyzed (structural equation modeling). Because it can produce scores for latent variables, confirmatory factor analysis is also widely used to support regression analysis when a variable has several indicators. Suppose the variable is expenditure or consumption and we face more than one indicator: confirmatory factor analysis is the solution for reducing the set of variables without losing the existing information.

What is the mechanism by which confirmatory factor analysis groups many items into smaller groups? It works on the variability among the items themselves. Mathematically, the relationship between the items in one factor is modeled as a linear equation, and computationally the technique operates on the variance-covariance (or correlation) matrix of the items. Again, confirmatory factor analysis aims to reduce a large set of questions into fewer groups without losing information.

Confirmatory factor analysis has similarities with several other statistical techniques, including principal component analysis, cluster analysis, and regression analysis. Compared with principal component analysis, confirmatory factor analysis is similar in that it reduces a large set of indicator items into several groups or factors; the difference is that principal component analysis is descriptive, producing groups without any confirmation, while confirmatory factor analysis is inferential, confirming whether the groups that have been formed correspond to the latent variables or not and describing the gap, or error, between model and data. Compared with cluster analysis, confirmatory factor analysis is similar in grouping many objects into several clusters; the difference is that cluster analysis groups the subjects of the research (units of analysis, respondents) rather than the question items, indicators, or variables. Compared with regression analysis, confirmatory factor analysis is similar in using linear equations to explain the relationships among indicators, questions, or variables; the difference lies in the number of models: factor analysis results in several equations, depending on the number of factors formed, while regression analysis produces only one model whose fit is tested.

http://en.wikipedia.org/wiki/Factor_analysis

http://www.statsoft.com/Textbook/Principal-Components-Factor-Analysis

Exploratory factor analysis vs confirmatory factor analysis


This article discusses the differences between exploratory factor analysis and confirmatory factor analysis. Exploratory factor analysis is abbreviated EFA, while confirmatory factor analysis is known as CFA.

About Exploratory Factor Analysis (EFA)

EFA is a statistical method used to build a structural model from a set of many variables. It is a factor analysis method that identifies the relationships among manifest (indicator) variables in building a construct. EFA is used when the researcher has no prior information or hypothesis about which set of indicators should be grouped into which variable, so the researcher starts from the set of indicators (manifest variables) and lets the factors form from them. EFA is also used when the indicators of a latent variable are not yet clear, since the indicators of one latent variable may overlap with those of another latent variable.

Researchers can use SPSS software for EFA; the input is the indicator-variable data. Because there is no prior assumption about where the indicators will be grouped, in EFA we do not know in advance how many factors or latent variables will be formed, although the researcher is allowed to specify the expected number of factors.

The measure indicating that indicators belong to the same group is the factor loading: when the loading on a factor is large, the indicator can be grouped into that factor.
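A minimal, illustrative EFA sketch in Python follows, assuming scikit-learn's FactorAnalysis; the data matrix is simulated so that two hidden factors drive eight indicators.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(7)
latent = rng.normal(size=(300, 2))                # two hidden factors
weights = rng.normal(size=(2, 8))                 # how the factors drive 8 indicators
X = latent @ weights + rng.normal(scale=0.5, size=(300, 8))

fa = FactorAnalysis(n_components=2)               # the researcher may fix the expected number of factors
fa.fit(X)
print(fa.components_)                             # factor loadings: a large value suggests
                                                  # the indicator belongs to that factor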

About Confirmatory Factor Analysis (CFA)

CFA is a form of factor analysis, used particularly in social research, whose main purpose is to test whether indicators that have previously been assigned to a group remain consistent when tested statistically. In CFA the researcher tests whether the data fit a model established beforehand. The fundamental difference between CFA and EFA is that in CFA the researcher already has an initial assumption about which indicators belong to which latent variable; at the start, the researcher has developed a hypothesized model based on a theoretical framework or previous studies.

Because an established construct model exists to be tested, CFA tests that model. CFA is therefore seen as part of structural equation modeling (SEM).

The fit measures used in CFA are the same as the SEM fit indices: Chi-square, RMSEA, GFI, and AGFI are some of the fit indices used, in addition to the estimated weight of each indicator.
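As a hedged illustration of how a CFA model and its fit indices might be specified outside Lisrel or Amos, here is a sketch assuming the third-party Python package semopy and its lavaan-style model syntax; the latent variables, indicator names, and data file are invented.

import pandas as pd
import semopy

desc = """
Motivation =~ m1 + m2 + m3
Performance =~ p1 + p2 + p3
"""

data = pd.read_csv("survey.csv")   # assumed file with columns m1..m3, p1..p3
model = semopy.Model(desc)
model.fit(data)

print(model.inspect())             # loadings and their significance
print(semopy.calc_stats(model))    # fit indices such as chi-square and RMSEA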

The similarity of EFA and CFA

One similarity between the two is the use of variance as the measure representing the contribution of the construct's variables.

Simulate Normal Data in Excel

[Figure: normal distribution curve. Source: onlinecourses.science.psu.edu]

In a study we are sometimes faced with a limited amount of data, whereas the statistical analysis may require more, say at least 30 observations, to satisfy parametric prerequisites. The question is whether we may simulate the limited data so that it becomes larger, and the answer is yes. Even if we know only the mean (average) and the standard deviation of the data, we can simulate 1,000 observations or more. For example, suppose we know that the mean is 20 and the standard deviation is 5, and that the sample comes from a normally distributed population. To obtain random numbers that are normally distributed, or that follow some other particular distribution, we can use Monte Carlo simulation with various available software.

The steps to expand limited data that follow a particular distribution pattern are as follows:

1. Define the starting point value

A starting point is required to generate the sequence of random numbers. However, the starting value does not significantly affect the simulated data, because it is just one number among the thousands that will be produced by the simulation.

[Screenshot: Monte Carlo simulation, step 1]


2. Determine the expected population distribution

Before simulating the data, we must decide the distribution we assume for the population data. For example, we may assume that the data follow a normal distribution pattern.

We need to know the various types of distribution in accordance with the scale of the data.

If the data are on a numerical scale, the possible distributions include the normal, log-normal, exponential, and other continuous distributions.

Meanwhile, if the scale is categorical, the distributions include the binomial, uniform, multinomial, hypergeometric, and so on.

[Screenshot: Monte Carlo simulation, step 2]

3. Determine the required assumptions for population distribution

Every distribution has certain statistical parameters. For example, if we assume a normal distribution, we must know at least two parameters, the mean and the standard deviation; these two parameters are then used to generate the other data.

[Screenshot: Monte Carlo simulation, step 3]

4. Run the data based on the assumptions

After determining the necessary assumptions, the next step is to run the data. We can iterate 1,000 times or even more; if we run 1,000 iterations, we get 1,000 random numbers that follow the distribution pattern we chose.

[Screenshot: Monte Carlo simulation, step 4]

5. Make reports

Once the run is complete, the output can be clicked to display whatever report is required.

[Screenshot: Monte Carlo simulation, step 5]

The result is 1,000 random numbers that follow the chosen distribution pattern, such as the normal distribution. The mean (average) and standard deviation of the 1,000 simulated values will, of course, follow the parameters specified above. With more iterations, the simulation is expected to produce smoother data that approach the population.
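For comparison, the same idea can be sketched outside Excel/Crystal Ball, for example in Python with numpy; this minimal example uses the mean of 20 and standard deviation of 5 mentioned above.

import numpy as np

rng = np.random.default_rng(seed=123)               # step 1: starting point (seed)
simulated = rng.normal(loc=20, scale=5, size=1000)  # steps 2-4: distribution, parameters, run
print(simulated.mean(), simulated.std())            # step 5: report; values close to 20 and 5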

This is how limited data can be expanded through a Monte Carlo simulation using the Crystal Ball software by Oracle.

Structural Equation Modeling with Lisrel, AMOS or SmartPLS


Structural equation modeling, better known as SEM, is a multivariate statistical analysis method. SEM differs from regression or path analysis in its data processing: SEM is more complicated because the model is built from both a measurement model and a structural model.

To process SEM data easily, we need statistical software. There are several packages for SEM, such as Lisrel, AMOS, and SmartPLS. Which of them is the most suitable to use? Below is a short review.

Advantages of Lisrel

Lisrel was developed by Karl Joreskog and Dag Sörbom and is the SEM software most widely used among researchers and practitioners. Its advantage is its ability to estimate complex relationships between variables. It can be operated in two ways, either with syntax or with SIMPLIS (simple LISREL), which makes it widely used across disciplines. The syntax is favored by users who are familiar with programming languages, while SIMPLIS is an alternative for those who are not.

A selection of estimation methods is available in Lisrel, so we need not cling to maximum likelihood estimation; which estimation method to use depends on the condition of the data.

Disadvantages of Lisrel

One disadvantage is its weakness with small sample sizes. When we have a sample of fewer than 200 and the model is complex, the estimation results are sometimes not in line with our expectations.

Advantages of Amos

Like SPSS, AMOS is statistical software developed by IBM. AMOS is dedicated to testing hypothesized relationships between variables. With this software we can determine the strength of the relationships between variables, both between latent variables and between latent and manifest variables, how significant those relationships are, and how well the hypothesized model fits the real data from the field.

With AMOS we need no syntax or complicated programming language to operate the software, which is an advantage for beginners or anyone unfamiliar with programming. We simply draw the latent and manifest variables and then connect them with the arrows provided.

Disadvantage of Amos

The advantage of AMOS is also its disadvantage: when the model is complex, we need to draw many objects, which becomes very tedious work. In Lisrel the same work can be done more simply with the programming language: we just copy and duplicate the syntax, run it, and the model is complete, however complex the model we want.

Advantages of Smart PLS

SmartPLS, based on partial least squares (PLS), is statistical software with the same goal as Lisrel and AMOS: to examine the relationships between variables, whether between latent variables or between latent variables and their indicator (manifest) variables.

SmartPLS is used when we have a limited number of samples but the model built is complex, something that cannot be done with the two packages above, which require an adequate sample size.

Another advantage of SmartPLS is its ability to process both formative and reflective SEM models. A formative SEM model is one in which the latent variable or construct is built by its indicator variables, so the arrows point from the indicator variables to the construct. A reflective SEM model is one in which the construct is reflected by its indicator variables, so the arrows point from the latent variable to the indicators. Statistically, the consequence of the formative model is that there is no error term on the indicator variables.

Disadvantages of Smart PLS

Because this software is intended for processing data with a small sample size, it is not suitable for research with a large sample.

How to Analyze Questionnaire Data Using SPSS

How do we process questionnaire data? It must pass through several stages, from entering the data into the computer via SPSS or Ms. Excel, through testing validity and reliability, to descriptive analysis and hypothesis testing. Here are the stages:

1. Validity and Reliability

What distinguishes questionnaire data processing from secondary data processing is the need for validity testing. When we conduct a study with a questionnaire, we need to test the validity and reliability of that questionnaire. Why is this necessary? Because the questionnaire is composed by the researcher while it is filled in by the respondents; the test is done to minimize misinterpretation between researcher and respondent.

A good questionnaire should be understood by respondents as well as it is understood by its maker, and it should give consistent results if filled in at different times.

For secondary data, we do not need to test validity and reliability.
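As an aside (not part of the original text), reliability is often summarized with Cronbach's alpha; here is a minimal sketch computing it by hand in Python with numpy, using an invented respondents-by-items score matrix.

import numpy as np

items = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])  # 5 respondents, 4 question items (invented scores)

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(alpha)   # values above roughly 0.7 are usually taken as acceptable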

2. Data Entry

After the questionnaires are collected, the next task is data entry from paper into the computer. The most common software for data entry is Excel, which is familiar to all of us. How is it done? We arrange the data as a matrix: the rows are the respondents, from the first respondent to the last in your sample, and the columns are the question items, one column per question in the questionnaire.
For closed questions, what you enter is the score of each answer. For example, strongly agree = 5, agree = 4, neutral = 3, disagree = 2, and strongly disagree = 1; the score is what you enter into the worksheet.
What if there are negatively worded questions in the questionnaire? In that case you reverse the score: 5 becomes 1, 4 becomes 2, and so on, as in the sketch below.
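A tiny illustrative pandas snippet for that reversal, with made-up column names:

import pandas as pd

df = pd.DataFrame({"q1": [5, 4, 2], "q2_negative": [1, 2, 5]})
df["q2_negative"] = 6 - df["q2_negative"]   # 5->1, 4->2, 3->3, 2->4, 1->5
print(df)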

3. Descriptive Analysis

The results of questionnaire data processing are often displayed as a descriptive analysis. What display is suitable? A frequency distribution is a common format for descriptive data: it presents how many respondents answered agree, how many answered disagree, and so on.
If required, we can also show other descriptive measures such as the mean and standard deviation. Note, however, that when we summarize ordinal data with a mean and standard deviation, we are effectively treating the data as numeric.

4. Hypothesis Testing

Can a questionnaire be used to test a hypothesis? Of course. Strictly speaking, Likert-scale questionnaire data are ordinal, so the most appropriate statistical techniques are non-parametric ones. However, because of the limitations of non-parametric tools, people sometimes first transform the ordinal data to a numeric scale, or they even skip the transformation and perform parametric analyses such as regression, assuming the data are on a numeric scale.

Trouble in questionnaire data processing? Contact us at +6281321898008.

Data Analysis Method in Quantitative Research

A research method is different from a research technique, even though the words are similar. A data analysis method refers to the more general approach, and a data analysis technique is part of a data analysis method. Research methods are broadly divided into two parts, quantitative and qualitative, and the quantitative method includes various analytic techniques such as correlation, regression, comparative, and descriptive techniques.

A quantitative data analysis method is an approach that processes data collected from primary or secondary sources through statistical or mathematical methods. The advantage of this approach is that it is more comprehensive.

Quantitative data analysis methods consist of several analytic techniques, such as:

1. Descriptive analysis: we describe the collected data through statistical measures such as the mean, median, mode, and standard deviation.

2. Comparative analysis: we compare one phenomenon with another, or we compare the same phenomenon across different groups of subjects.

3. Correlation analysis: we examine the connection between one phenomenon and another whose relationship has not yet been proven in theory.

4. Causality analysis: we test the causality between several phenomena that, according to theory, are suspected to influence each other.

Quantitative data analysis methods are more widely used in the exact sciences, economics, engineering, and medicine, although today a quantitative approach is also applied in much social research.

A qualitative data analysis method is an approach that processes data from in-depth observation, interviews, and literature. The advantage of this method is the depth of the study results.

Qualitative data analysis methods are more widely used in the social sciences, law, sociology, and politics, although many social fields also use quantitative methods. Qualitative methods offer an advantage in the depth of analysis that the social field requires: how can we examine the culture of a particular ethnic group without profound observation, or explore the philosophical aspects of specific provisions in the law without an in-depth review? For such questions, qualitative methods are necessary.

Qualitative data analysis consists of a variety of techniques, such as:

1. Organizing the data: qualitative data need to be organized so that they become more structured. Why is this necessary? Because, as we all know, in qualitative research there is no definite measurement, let alone a standardized scale such as in quantitative research.

2. Coding the data: coding is needed because the data are mostly verbal rather than numeric, so the researcher codes them to homogenize items that have the same meaning.

3. Connecting concepts: one concept is connected with other concepts that may influence each other, even though the size of the relationship or influence cannot be described with numbers.

4. Validating the results: the conclusions are legitimized by comparing them with other concepts that we think contradict them, and by asking how many other concepts are contrary to the conclusions.