From the Editors: Experimental designs in international business research

EDITORIAL

From the Editors: Experimental designs in international business research

Mary Zellmer-Bruhn, Paula Caligiuri and David C. Thomas
Area Editors

Correspondence: Mary Zellmer-Bruhn, University of Minnesota, Carlson School of Management, 321 19th Avenue South, Minneapolis, MN 55406, Fax: 612.625.6.1316. email: zellm002@umn.edu

Abstract
The question that motivated this editorial was, "where are the IB experiments?" The short answer is that experiments are largely absent from IB, and we argue that they shouldn't be. Experimental methods offer the opportunity to significantly improve the evidence for the causal relationships in international business research in a variety of ways. In this article we highlight the value and limitations of experiments in IB research, and explain the basic tenets of experimental design and thinking with the goal of encouraging the submission of more papers with an experimental design to JIBS.

Journal of International Business Studies (2016) 47, 399–407. doi:10.1057/jibs.2016.12

Keywords:
international research design

© 2016 Academy of International Business. All rights reserved. www.jibs.net

The goal of JIBS is to publish insightful, innovative and impactful research on international business.

INTRODUCTION

The opening quote, taken from the "JIBS statement of editorial policy," sets high expectations for articles published in JIBS. Making a theoretical contribution is central, and the evidence available supporting the causal relationship(s) proposed in the theory is a significant component when judging the level of insight and impact the study's conclusions warrant. Causal ambiguity can be a significant limitation at best, and is often a fatal flaw in research studies. Evidence for causal relationships should be a concern for IB scholars because our theories, and the empirical evidence supporting them, are used to advise founders, leaders, organizational members, and other stakeholders about policies, interventions, strategies and more, that "have a profound impact on the lives and wellbeing of people all over the world" (Rousseau, 2006; Rynes, Rousseau, & Barends). Offering such advice becomes uncomfortable when the evidence supporting any given theory is limited or suspect. In this article we point out the limited application of experimental designs in IB research, highlight the value (and limitations) of experimental methods for IB research, provide a reminder of the basic tenets of experimental design, and encourage more papers using experimental designs to be submitted to JIBS. The purpose of
this essay is to restate the fact that JIBS welcomes (and encourages) experimental research and reinforce the view that IB research could benefit from more widespread application of experimental methods, when possible, to evaluate the internal validity of IB theories. We draw on recent examples to illustrate how experimental methods can be used to make strong theoretical contributions to IB, and we suggest that by thinking experimentally both JIBS authors and reviewers can better evaluate the origins of constructs and the robustness of the causal relationships in theory. Our goal is to highlight one avenue to elevate the quality of evidence we accumulate for IB theories.

Research Methods and Evidence

Choices about research methods have important implications for the accumulation of IB knowledge. There are many ways to accumulate strong evidence, for example through meta-analysis of large numbers of high-quality correlational studies. However, because they provide the only unequivocal method for demonstrating causality, many consider random-assignment, controlled experiments the gold standard for evidence (Pfeffer & Sutton, 2006). Controlled experiments isolate causal variables and enable a strong test of the robustness of a theory: they provide convincing evidence for theories, especially when followed by field studies. In describing the usefulness of experimentation in economics research, Croson, Anand, and Agarwal noted that experiments can "be designed to capture what researchers believe are the relevant dimensions of the field and to replicate the regularity in controlled conditions. Then, one by one, the real-world features can be eliminated (or relaxed or changed) until the regularity observed disappears or significantly weakens.
This exercise identifies the cause (or causes) of the observed regularity, and can result in theory construction."

There is considerable debate about the usefulness of ranking the quality of evidence in management research (e.g., Learmonth & Harding, 2006; Morrell, 2012). Our intention here is not to argue for the pre-eminence of one method over another, but to highlight the importance of method choice, and the need for so-called triangulation of methods (McGrath, 1982) to evaluate the internal, external, construct, and statistical conclusion validity of our work (Cook & Campbell, 1976). For readers without a deep grounding in experimental methods, we summarize the basics in Box 1.

Experimentation in IB Research

Compared with other management areas, experiments in IB, and correspondingly in JIBS, are rare. A search of the past twenty years of empirical publications in JIBS illustrates this point. Of 900 empirical research articles published, a mere eight (or less than 1%) used an experimental design. In these eight papers the topics ranged across marketing/advertising consumer behavior, sales communication, venture capitalists' (VCs') decision-making, cultural differences in decision-making, and empowerment and job satisfaction. Furthermore, most of these studies would be considered quasi-experiments rather than true experiments because of their sampling approaches.

There may be a number of reasons for the paucity of true experimental studies, including that much of the research published in JIBS is multidisciplinary and examines macro, often long-term phenomena. Many researchers of IB phenomena cannot engage in random assignment into experimental or control conditions – assigning countries to political economies, companies to globalization strategies, or country of origin to individuals would be impossible, to say the least. In cases like these, we do not have IB experiments because random assignment is not possible.
Yet while not right for all areas of IB, there are some IB theories and applications that would benefit from experimental designs, from quasi-experimental designs, and from experimental thinking.

Another reason we may see few experiments published in JIBS is the nature of the samples used in many experimental studies. In other disciplines, such as psychology, experiments are commonly conducted using student samples, and there may be a false perception by authors that JIBS does not publish research using student samples. For some IB topics, such as the study of cross-cultural perceptions of advertising strategies or global career choices, student samples may be quite appropriate (Bello et al., 2009). The choice of sample alone should not discourage IB scholars from using the experimental method. The real question should be, as with all other studies, whether the results found from a given sample can generalize to the broader population.

Beyond the issue of student samples, a key challenge for IB scholars is recruiting participants from varying cultural and institutional backgrounds and conducting experiments with people from different geographic locations. For many questions, it may be important for participants to be located in their
Box 1 Basics of Experiments and Quasi-Experiments

Experimentation offers strong tests of internal validity. Internal validity concerns causality (Cook & Campbell, 1976: Chapter 1) – or the assessment of a cause-and-effect relationship between two variables. For causation to be determined there must be (1) true covariation between two variables, (2) demonstration that the cause preceded the effect in time, and (3) alternative explanations must have been ruled out (Sackett & Larson, 1990; Schwab, 2013). The ability of experiments to provide such strong inference (Platt, 1964) varies based on design and execution.

The hallmarks of experiments are manipulation of independent variables or trials, which are potential causes, and control of extraneous variables (Cook & Campbell, 1976: Chapter 1). Thus to be able to conduct an experimental test of a theory, the researcher must be able to control (manipulate) the level(s) of the independent variable under study, or the independent variable must vary due to an exogenous event outside the control or influence of the cases studied. Manipulation (or exogeneity) is necessary to establish covariation and demonstrate that the cause preceded the effect – that is, the dependent variable cannot be responsible for variation in the independent variable. Control is required to establish covariation and rule out alternative explanations. True experiments involve independent variable control through random assignment of cases to treatment conditions, and the true experiment is typically considered the only research method that can assess a cause-and-effect relationship.

Random assignment controls for unobserved (extraneous) variables by equalizing treatment groups in order to rule out alternative explanations and enhance the determination of covariation between two variables.
Researchers decide which cases are assigned to a condition using a random number table or generator, a roll of the die, or a comparable technique. Random assignment serves to average out the effect of nuisance variables across cases in a study. This means that because the choice to assign to a condition lacks any particular pattern (it is random), unmeasured variables should not be meaningfully correlated with the independent or treatment variable. It is particularly beneficial over other means of statistical control used in non-experimental work because researchers do not need to measure the nuisance variables, and it controls such nuisance variables "whether or not the researchers are aware of them" (Schwab, 2013: 87). Random assignment becomes more effective at averaging out the effects of nuisance variables as sample sizes become larger.

Limits of experimentation

Experiments, even when involving manipulation of the treatment and random assignment to conditions, still remain vulnerable to a variety of threats to internal validity. For instance, despite random assignment across treatment conditions, it is possible that the groups obtained had pre-existing differences (which, as noted previously, can result from random assignment with small sample sizes) on a quality that systematically altered the response of one group to the treatment as compared with the other. In this case, the observed X–Y relationship was spuriously generated by the unobserved difference between the groups. This problem, known as selection threat, can be evaluated (and ruled out) by implementing a research design with a pretest of the dependent variable. Likewise, there are times when lab results can be misleading. For instance, lab studies bring participants into an artificial setting and may seriously reduce realism and limit generalizability of the results (Meltzoff, 1998).
Field experiments also involve the manipulation of one or more independent variables but are conducted in a realistic or natural situation, with conditions controlled as well as the situation will permit. While the more realistic context of these types of experiments improves the external validity of the results, they have many of the same limitations as studies conducted in laboratory environments.

Another limitation of all experiments is that the observed covariation between independent and dependent variables may be disturbed by the research environment itself, such as demand characteristics or researcher expectations. It is beyond the scope of this editorial to describe the numerous experimental designs created to handle various threats, but when planning and designing experiments extreme care must be taken to choose and correctly implement the design elements that best manage potential threats to internal validity.

Beyond limits to internal validity, experiments also face threats to external validity. External validity concerns the generalizability of research results. The artificial qualities of an experiment often involve an unrealistic setting, and results obtained in such a controlled and different environment may not accurately reflect the relationship in an organizational setting. While this critique may be valid under some circumstances, at least one study, utilizing data from many meta-analyses, compared the results of research done in laboratory experiments with research done in field settings. The researchers discovered that the effect sizes from the laboratory studies correlated 0.73 with the effect sizes from the field studies (Anderson, Lindsay, & Bushman, 1999). These results suggest that the external validity problem may not always be as significant as some might believe; however, as Colquitt (2008) noted, the correspondence between results of field studies and experiments probably varies across research streams and theories.
Experiments may not be well suited for studying "complex, multicomponent, nonlinear" phenomena (Rynes et al.). To determine the appropriateness of an experimental design, scholars must think through the ways in which things like constructs may vary in the contrived setting of the laboratory vs an organization, industry, or country context, for example.

There are practical realities in designing experiments. Not all independent variables can be randomly assigned to conditions. For example, if a researcher wanted to understand the effect of pre-departure training on a short-term international assignment for R&D engineers being sent to help set up a new lab in another part of the world, the organization may be unwilling to randomly assign engineers to different training conditions. We can assign individuals, teams, units, and even organizations to conditions, but we cannot assign countries to conditions. And it may be impossible, impractical, or unethical to withhold an independent variable. In such circumstances, researchers may be able to conduct a quasi-experiment.

Quasi-experiments

Quasi-experiments (sometimes called natural experiments) are often thought of as a special case of field experiments. Like true experiments, quasi-experiments involve research where the independent variable (treatment) is not determined by or controlled by the cases being studied. In a quasi-experiment, the independent variable might (1) be controlled by the experimenter, (2) be produced by an exogenous event, or (3) vary exogenously across groups. The key distinction between true experiments and quasi-experiments is that in quasi-experiments, assignment to the treatment condition is not random. In quasi-experiments, researchers design methods other than random assignment to improve evidence of internal validity (Schwab, 2013).
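The balancing property of random assignment described in Box 1 can be illustrated with a short simulation of our own (not from the editorial): the gap between treatment and control groups on an unmeasured nuisance variable shrinks as the sample grows, even though the researcher never measures that variable.

```python
import random
import statistics


def group_gap(n, seed=0):
    """Randomly assign n cases to two groups and return the absolute
    difference in group means on an unmeasured nuisance variable."""
    rng = random.Random(seed)
    nuisance = [rng.gauss(0, 1) for _ in range(n)]
    rng.shuffle(nuisance)  # random assignment: position determines group
    treatment, control = nuisance[: n // 2], nuisance[n // 2:]
    return abs(statistics.mean(treatment) - statistics.mean(control))


def mean_gap(n, reps=200):
    """Average the between-group gap over many random assignments."""
    return statistics.mean(group_gap(n, seed=s) for s in range(reps))


small, large = mean_gap(20), mean_gap(2000)
# With these settings the average gap at n = 2000 is roughly a tenth of
# the gap at n = 20: larger samples equalize the groups more closely.
```

This is exactly the sense in which random assignment controls nuisance variables "whether or not the researchers are aware of them" (Schwab, 2013: 87), and why its protection weakens at small sample sizes.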
home context – for example, a researcher may wish to evaluate a question about differences between Chinese participants and German participants, but the expected effects may vary importantly if the study were conducted in the United States vs conducting the experiment with Chinese participants in China and with Germans in Germany. The cognitive and behavioral manifestations and effects of culture vary based on one's location and associated contextual cues (e.g., Chiu, Gelfand, Yamagishi, Shteynberg, & Wan, 2010; Hong, Morris, Chiu, & Benet-Martinez, 2000). For instance, the effects of requiring English as the common language of communication may look quite different if the experiment were conducted in the US vs with the participants located in their native language environments (e.g., Ji, Zhang, & Nisbett, 2004). While participant recruiting and location remains a barrier to IB experiments, advances in technology, combined with creative study designs, allow new ways to conduct international experiments. For example, subjects can be recruited globally, and experiments can be conducted virtually – through synchronous interaction (e.g., Jang, 2014). These developments allow experimental applications in areas previously not considered.

Exemplars in IB research

While experimental designs are somewhat rare in IB research, and particularly in JIBS, there are subfields within IB where experiments are more common, especially those examining individual-level or team-level outcomes. These areas are good places to start in looking for how experiments can be conducted on IB topics.
Consider the following examples within international business subfields where controlled experiments have made contributions that would have been impossible or where other methods would have provided weaker evidence.

International marketing

Marketing, and by extension international marketing, has a long history of experimental research, including areas such as consumer behavior and advertising (e.g., Bazerman, 2001; Peterson, Albaum, & Beltramini, 1985; Torelli, Monga, & Kaikati, 2012), decisions about retail store environments (e.g., Baker, Levy, & Grewal, 1992), sales and influence tactics (e.g., Busch & Wilson, 1976; Griskevicius, Goldstein, Mortensen, Sundie, Cialdini, & Kenrick, 2009), and even partner satisfaction in marketing alliances (e.g., Shamdasani & Sheth, 1995). Of the few experimental studies published in JIBS, marketing has been a key focus. For instance, in a 1999 JIBS article, Chanthika Pornpitakpan used an experimental design to conclude that Americans who adapt to Thai and Japanese language and behavioral norms will have more positive sales outcomes when working in those national contexts (Pornpitakpan, 1999). More recently, Wan and colleagues (Wan, Luk, & Chow, 2014) published a paper based on an experiment in six Chinese cities examining consumer responses to sexual advertising, and found support for their expectation that men's responses to nudity in advertising were less affected by modernization than were women's responses. When addressing potentially sensitive topics such as sexuality and arousal, norms for which are likely to vary importantly across cultures, survey methods may be biased (Hui & Triandis, 1985). Under such conditions, experimental manipulation may provide much clearer results. Such potential for response bias is common in IB research (Hui & Triandis, 1985). Furthermore, intersubjective theories of culture indicate that people may not act in accordance with stated values or beliefs (Chiu et al., 2010; Hong et al., 2000).
As a result, insights such as those found in Wan and colleagues' work may not be possible without an experimental design. Together these examples illustrate experimental research published in JIBS, and the rich tradition of experimental research in marketing suggests the opportunity for more such studies to be submitted to JIBS.

International management

Experimental research is very common in many subfields of management. There is a particularly strong tradition of experimental research in cross-cultural management, cross-cultural psychology, and cross-cultural HRM, with many experimental studies conducted each year. However, few of these studies are finding their way to JIBS. One example in international management is an experiment published in JIBS that we identified in our literature review. The paper (Zacharakis, McMullen, & Shepherd, 2007) used a policy-capturing experiment involving 119 VCs in three countries to examine the influence of economic institutions on VC decision-making. Each VC was provided with randomized access to 50 ventures and evaluated them based on eight decision factors. Results indicated cross-national differences in the type of information emphasized in investment decisions.
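The core of a policy-capturing design of this kind can be sketched in a few lines; the cue names and levels below are illustrative assumptions of ours, not the eight decision factors Zacharakis et al. (2007) actually used.

```python
import itertools
import random

# Hypothetical decision cues a venture capitalist might weigh.
cues = {
    "market_growth": ["low", "high"],
    "team_experience": ["novice", "seasoned"],
    "ip_protection": ["weak", "strong"],
}

# Full factorial set of venture profiles (2 x 2 x 2 = 8 vignettes).
profiles = [dict(zip(cues, combo)) for combo in itertools.product(*cues.values())]

# Present the vignettes to each participant in a random order; regressing
# each VC's ratings on the manipulated cues then recovers the weight each
# cue carries in that VC's investment decisions.
rng = random.Random(7)
rng.shuffle(profiles)
```

Because the cue levels are manipulated and fully crossed rather than observed, the cues are uncorrelated by construction, which is what lets this design isolate how each factor drives the judgments.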
In another example of experimental work in this domain, but one that did not appear in JIBS, Caligiuri and Phillips (2003), in the context of expatriate candidates' decision-making on whether to accept an international assignment, conducted a true experiment, randomly assigning actual candidates for expatriate assignments to receive, or not, a self-assessment decision-making tool. Compared with the control group, participants receiving a realistic job preview (RJP) and self-assessment tool reported greater self-efficacy for success on the international assignment and had greater confidence in their decision to accept an international assignment. If this study had employed a correlational design, one which examined the level of use of the tool against the outcome measures, the researchers could only have inferred the influence of the tool itself. With that type of design, there was an alternative explanation for the results: the expatriate candidates with higher efficacy (in the first place) might have been those more likely to seek out and use the tool. Random assignment and the pretest allowed the researchers to disentangle baseline individual differences and isolate causality: the RJP self-assessment tool caused the expatriates to have greater efficacy and increased confidence in their ability to make an effective decision about an international assignment. We encourage international management scholars to consider experimental approaches, and those who are conducting work of this type to consider JIBS as an outlet.

International economics

Economics scholars have increasingly embraced experimental methods, and this has extended to research in international economics (e.g., Hey, 1998; Roth, 1995). Again, however, little of this work has found its way to JIBS. Most applications of experiments within economics have been to test theories of individual choice, game theory, and the organization and functioning of markets (Buchan, 2003).
An example is a study by Roth and colleagues (Roth, Prasnikar, Okuno-Fujiwara, & Zamir, 1991) in which they conducted a multicountry comparison of bargaining behavior. The authors compared two-party bargaining games with multiparty bargaining games in four countries. The differences observed were interpreted as illustrating cross-country differences in what is seen as an acceptable offer. In a follow-up study, Buchan and her colleagues (Buchan, Johnson, & Croson, 2006) manipulated the balance of power among players in the bargaining games, and found that power had a more differential influence in Japan than in the US in terms of offers made. These two examples illustrate a history of experimentation in international economics, at least in the areas around individual choice and game theory. Experimental approaches to cultural differences in ultimatum games have proliferated enough to allow meta-analyses (Oosterbeek, Sloof, & Van De Kuilen, 2004). Finally, with respect to understanding markets, experimental international economics research has also been proposed as beneficial to test proposed interventional policies – in the hope that conducting experiments can reveal unintended consequences of policy decisions (Camerer, 2003; Friedman & Sunder, 1994).

Opportunities for Experimental Approaches in IB

One obvious opportunity for additional experimental studies in IB is to encourage more experimental work in the three domains identified above. However, in keeping with improving the evidence for causal relationships more broadly across IB, there are several other opportunities to apply experimental approaches. These are the control of possible alternative causes, designing longitudinal field experiments, employing experiments as part of a multiple-methods approach, and thinking experimentally.

Controlling nuisance variables

A number of experimental design methods can be used to control nuisance variables, or alternative causal explanations.
Two examples are matching, and identifying comparable groups of cases and then varying the treatment level across the groups. Matching equates cases on a number of dimensions (determined by theory). Done correctly, matching on nuisance variables can strongly rule out alternative causal explanations. For example, Earley (1989) matched an American sample to a harder-to-obtain sample from the P.R.C. on age, gender, job tenure, education, job duties, and career aspirations in a study of the effect of social loafing across cultures. Matching can be effective but becomes difficult to execute correctly as the number of cases required increases and/or as the number of nuisance variables needing control increases (Schwab, 2013). An alternative method of control involves identifying comparable groups of cases for each level of the independent variable. A recent example is provided in the previously mentioned article by Wan et al. (2014) in JIBS, in which they test the effect of modernization on consumer responses to
advertising. In that study several Chinese cities at various stages of modernization serve as the independent variable, while the context of a single country controls for a wide range of societal-level variables. Similarly, Meyer and Peng (2005) suggest that changes in Central and Eastern Europe since the 1990s provide unique societal quasi-experiments that offer an opportunity to test the applicability of existing theories and develop new ones in the areas of (1) organizational economics, (2) resource-based theories, and (3) institutional theories.

Longitudinal field experiments

Longitudinal field experiments are another quasi-experimental method that could work well in international business research. When pretest or baseline levels of the outcome measure are known, the independent variable is introduced – consecutively – to comparable units. At each point in time, change in the outcome is assessed. If the independent variable affects the dependent variable only at the point in time when it is introduced to the group, a stronger cause-effect case can be made. This design is desirable because organizations often run pilot programs in certain subsidiaries or introduce practices across subsidiaries consecutively (rather than introducing a new technology platform or training practice organization-wide). While desirable because it is naturally occurring, this longitudinal field design can present a problem if the groups being compared are not similar from the onset or if some other concurrent occurrences, such as currency fluctuation or a geopolitical crisis, affect the outcomes tested over time.

Experiments and multiple methods

The nature of international business questions very often means considering business problems in rich multicountry or multicultural contexts. Satisfying the competing desiderata of strong evidence in these contexts may require multiple methods.
As a part of a multimethod approach, experiments present the opportunity to make a stronger case for internal validity for studies that are set in a context that is rich in generalizability. For example, extracting the essential elements of the relationship between key variables in a correlational study and bringing them into a controlled environment can demonstrate the robustness of the relationship. That is, combining an experiment with a survey offers evidence of both internal and external validity (Scandura & Williams, 2000). For example, Grinstein and Riefler's (2015) JIBS article reports the results of four studies, which combine correlational studies and experiments conducted in three countries, to show that highly cosmopolitan consumers demonstrate environmental concern and engage in sustainable behavior. When possible, the results of an experimental design could be coupled with a correlational design. The former could provide evidence of causality while the latter could address a threat to external validity.

Thinking experimentally

Perhaps the approach with the most universal applicability to IB is to think experimentally. In evaluating research and reflecting on our own research designs we can benefit from thinking experimentally, even if we are unable to implement true experimental designs. Thinking experimentally involves, among other things, critical thinking to rule out plausible alternatives, better understanding of our theoretical constructs by considering the research context, and thoughtful effort to enhance conclusions about covariation, causal order, and alternative explanations through research design.

Thinking experimentally can help researchers better understand the nature of their constructs by separating the function of the construct from the context in which it is embedded.
The window in someone's living quarters provides an example. In an American farmhouse we might find several small openings in the walls consisting of segmented frames, while in a traditional Native American tepee there is an opening only at the top, and in a medieval castle there exist a number of very small trapezoidal orifices. It can be argued that each of these openings is unique, having developed within its specific context. The tepee opening is basically a vent for fresh air, while the farmhouse window also acts as a viewing port, and the castle window serves both these functions but is designed to be a defensive structure. However, if we focus on their function (letting in light, ventilation) we discover their similarity. By thinking experimentally we evaluate constructs in terms of their relationship to other constructs, while controlling for the effects of specific contexts. Because international business research is typically embedded in different societal contexts, it is essential that the universal vs country- or culture-specific aspects of constructs are clearly specified, no matter what method is applied.

Thinking experimentally also helps us to answer the question of whether a study's results are an
artifact of the sample or sampling approach and whether the change in the dependent variable cannot be explained by other endogenous variables beyond those proposed in the theory (Reeb, Sakakibara, & Mahmood, 2012). In the peer review process, these are often labeled fatal flaws – something other than the theory proposed is explaining the results present in the data. An example provided by Reeb et al. (2012) considers how firm-level internationalization affects corporate decision-making. As they note, an experimental design would assign firms randomly to conditions such as multinational and domestic, and then evaluate decision-making. In practice, firms are not randomly distributed across levels of internationalization, and the threat to causal conclusions about the relationship between level of internationalization and corporate decision-making is that both internationalization and decision-making could be driven by a third, unobserved cause. In such a case, we say that internationalization is endogenous. Another way to look at this is to see the independent variable as a "non-random treatment." The presence of endogeneity creates inconsistent regression estimates and biased test statistics (Wooldridge, 2010).
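This endogeneity threat can be made concrete with a small simulation of our own (illustrative only): an unobserved "firm quality" variable drives both the decision to internationalize and the outcome, so a naive comparison of multinationals with domestic firms shows a large difference even though the true causal effect is zero.

```python
import random
import statistics

rng = random.Random(42)
true_effect = 0.0  # internationalization has no causal effect in this world

firms = []
for _ in range(10_000):
    quality = rng.gauss(0, 1)                      # unobserved third cause
    multinational = quality + rng.gauss(0, 1) > 0  # non-random "treatment"
    outcome = true_effect * multinational + quality + rng.gauss(0, 1)
    firms.append((multinational, outcome))

mnc = [y for m, y in firms if m]
dom = [y for m, y in firms if not m]
naive_estimate = statistics.mean(mnc) - statistics.mean(dom)
# naive_estimate is far from the true effect of 0.0: the unobserved
# quality variable biases the non-experimental comparison.
```

Randomly assigning the "treatment" would break the link between quality and internationalization and pull the estimate back toward zero, which is precisely what a non-random treatment cannot guarantee.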
A consideration of approaches to address endogeneity is beyond the scope of this article (see Reeb et al., 2012 for more discussion); however, from the standpoint of encouraging experimental thinking, it is important for scholars both to think through and to apply available tests to evaluate the risks of endogeneity to causal inference and statistical conclusion validity in their studies.

Experimental thinking can help in evaluating evidence for cause and effect, and can reinforce design choices and analytical approaches that allow stronger causal tests when true experiments are not possible. Analytical approaches such as identifying natural experiments (see Choudhury & Khanna, 2014 for a recent example of a natural experiment application in JIBS; Khanna, 2009), instrumental variables regression, regression discontinuity, or matched sample designs can aid in such efforts. However, see Thomas, Cuervo-Cazurra, and Brannen (2011) as a caution against substituting analytical crutches for critical thinking.

Case studies as natural experiments
Finally, experimental thinking can be applied in qualitative research. For instance, case studies, often regarded for their utility in inducing new theory from empirical data (Eisenhardt, 1989), can be a vehicle for experimental thinking. Yin (2013) suggests that case studies are best suited to addressing how and why questions because of their high degree of internal validity. Welch, Piekkari, and Plakoyiannaki (2011) reiterate this position by suggesting the case study is a natural experiment. Considering the multiple and complex causal links often revealed in a case is an example of thinking experimentally even if the case itself is not used to explain these relationships.
For instance, Wong and Ellis (2002) published a paper in JIBS in which they used an "experiment-like replication logic" to select cases for inclusion.

These examples offer some insight into how thinking experimentally can lead us to conceptual, design, and analytical approaches that improve the internal validity of our research when true experiments are not possible. The need to think experimentally extends beyond academe. For example, Davenport (2009), in a Harvard Business Review article entitled How to Design Smart Business Experiments, highlights the many ways organizations are investing in training and software to conduct experiments before making strategic decisions in marketing, advertising, operations, and the like. Across a range of organizations and functional areas, "these organizations are finding out whether supposedly better ways of doing business are actually better. Once they learn from their tests, they can spread confirmed better practices throughout their business" (Davenport, 2009).

Future of Experimentation in JIBS
We emphasize that JIBS does not privilege one type of research (i.e., experiments) over others. In fact, our current editorial team has led the way to encourage more qualitative studies in IB and JIBS alike (Birkinshaw, Brannen, & Tung, 2011). As stated in the JIBS editorial policy:

JIBS is a methodologically pluralistic journal. Quantitative and qualitative research methodologies are both encouraged, as long as the studies are methodologically rigorous. Conceptual and theory-development papers, empirical hypothesis-testing papers, and case-based studies are all welcome. Mathematical modeling papers are welcome if the modeling is appropriate and the intuition explained carefully.

This statement clearly indicates that experiments are welcome at JIBS.
The impact of such a statement is reduced, however, if potential contributors look through the journal and see little evidence that this methodological pluralism extends to experiments. One of the evaluation criteria applied to manuscripts is appropriateness for JIBS. We hope that
our essay highlights that experiments are indeed appropriate for JIBS and dispels any view among potential contributors that a bias against experiments might discourage them from submitting their research to the journal.

Experiments are just one arrow in our quiver of methods, and any method chosen needs to be appropriate for the research question(s) pursued. No single method can address all the necessary elements of validity (internal, construct, external, statistical conclusion). We encourage mixed methods. Ultimately, improving our justification for the methods chosen, and our description of the evidence and limitations produced by those methods, will add value to the IB research published in JIBS. We encourage more experimental research, where appropriate. Experiments are notably underrepresented in JIBS, and they offer an opportunity to improve the evidence for causal relationships in international business research.

NOTE
1. This metaphor is borrowed from Earley and Mosakowski (1996).

REFERENCES
Anderson, C. A., Lindsay, J. J., & Bushman, B. J. 1999. Research in the psychological laboratory: Truth or triviality? Current Directions in Psychological Science, 8(1): 3–9.
Baker, J., Levy, M., & Grewal, D. 1992. An experimental approach to making retail store environmental decisions. Journal of Retailing, 68(4): 445.
Bazerman, M. H. 2001. Consumer research for consumers. Journal of Consumer Research, 27(4): 499–504.
Bello, D., Leung, K., Radebaugh, L., Tung, R. L., & Van Witteloostuijn, A. 2009. From the editors: Student samples in international business research. Journal of International Business Studies, 40(3): 361–364.
Birkinshaw, J., Brannen, M. Y., & Tung, R. L. 2011. From a distance and generalizable to up close and grounded: Reclaiming a place for qualitative methods in international business research. Journal of International Business Studies, 42(5): 573–581.
Buchan, N. R. 2003. Experimental economic approaches to international marketing research.
In Handbook of research in international marketing: 190–208. Cheltenham: Edward Elgar Publishing.
Buchan, N. R., Johnson, E. J., & Croson, R. T. 2006. Let's get personal: An international examination of the influence of communication, culture and social distance on other regarding preferences. Journal of Economic Behavior & Organization, 60(3): 373–398.
Busch, P., & Wilson, D. T. 1976. An experimental analysis of a salesman's expert and referent bases of social power in the buyer–seller dyad. Journal of Marketing Research, 13(1): 3–11.
Caligiuri, P. M., & Phillips, J. 2003. An application of self-assessment realistic job previews to expatriate assignments. International Journal of Human Resource Management, 14(7): 1102–1116.
Camerer, C. 2003. Behavioral game theory: Experiments in strategic interaction. Princeton, NJ: Princeton University Press.
Chiu, C. Y., Gelfand, M. J., Yamagishi, T., Shteynberg, G., & Wan, C. 2010. Intersubjective culture: The role of intersubjective perceptions in cross-cultural research. Perspectives on Psychological Science, 5(4): 482–493.
Choudhury, P., & Khanna, T. 2014. Toward resource independence – Why state-owned entities become multinationals: An empirical study of India's public R&D laboratories. Journal of International Business Studies, 45(8): 943–960.
Colquitt, J. A. 2008. From the editors: Publishing laboratory research in AMJ: A question of when, not if. Academy of Management Journal, 51(4): 616–620.
Cook, T. D., & Campbell, D. T. 1976. The design and conduct of quasi-experiments and true experiments in field settings. In M. D. Dunnette (Ed), Handbook of industrial and organizational psychology: 223–336. Chicago, IL: Rand-McNally.
Croson, R., Anand, J., & Agarwal, R. 2007. Using experiments in corporate strategy research. European Management Review, 4(3): 173–181.
Davenport, T. 2009. Make better decisions. Harvard Business Review, 87(11): 117–123.
Earley, P. C. 1989.
Social loafing and collectivism: A comparison of the United States and the People's Republic of China. Administrative Science Quarterly, 34(4): 565–581.
Earley, P. C., & Mosakowski, E. 1996. Experimental international management research. In B. J. Punnett, & O. Shenkar (Eds), Handbook of international management research: 83–114. London: Blackwell Publishers.
Eisenhardt, K. M. 1989. Building theories from case study research. Academy of Management Review, 14(4): 532–550.
Friedman, D., & Sunder, S. 1994. Experimental methods: A primer for economists. Cambridge: Cambridge University Press.
Grinstein, A., & Riefler, P. 2015. Citizens of the (green) world? Cosmopolitan orientation and sustainability. Journal of International Business Studies, 46(6): 694–714.
Griskevicius, V., Goldstein, N. J., Mortensen, C. R., Sundie, J. M., Cialdini, R. B., & Kenrick, D. T. 2009. Fear and loving in Las Vegas: Evolution, emotion, and persuasion. Journal of Marketing Research, 46(3): 384–395.
Hey, J. D. 1998. Experimental economics and deception: A comment. Journal of Economic Psychology, 19(3): 397–401.
Hong, Y. Y., Morris, M. W., Chiu, C. Y., & Benet-Martinez, V. 2000. Multicultural minds: A dynamic constructivist approach to culture and cognition. American Psychologist, 55(7): 709.
Hui, C. H., & Triandis, H. C. 1985. Measurement in cross-cultural psychology: A review and comparison of strategies. Journal of Cross-Cultural Psychology, 16(2): 131–152.
Jang, S. 2014. Bringing worlds together: Cultural brokerage in multicultural teams. Doctoral dissertation, Harvard University.
Ji, L. J., Zhang, Z., & Nisbett, R. E. 2004. Is it culture or is it language? Examination of language effects in cross-cultural research on categorization. Journal of Personality and Social Psychology, 87(1): 57.
Khanna, T. 2009. Learning from economic experiments in China and India. The Academy of Management Perspectives, 23(2): 36–43.
Learmonth, M., & Harding, N. 2006. Evidence-based management: The very idea.
Public Administration, 84(2): 245–266.
McGrath, J. 1982. Dilemmatics: The study of research choices and dilemmas. In J. E. McGrath, J. Martin, & R. A. Kulka (Eds), Judgment calls in research: 69–102. Newbury Park, CA: Sage.
Meltzoff, J. 1998. Critical thinking about research: Psychology and related fields. Washington, DC: American Psychological Association.
Meyer, K. E., & Peng, M. W. 2005. Probing theoretically into Central and Eastern Europe: Transactions, resources, and institutions. Journal of International Business Studies, 36(6): 600–621.
Morrell, K. 2012. Evidence-based dialectics. Organization, 19(4): 461–479.
Oosterbeek, H., Sloof, R., & Van De Kuilen, G. 2004. Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics, 7(2): 171–188.
Peterson, R. A., Albaum, G., & Beltramini, R. F. 1985. A meta-analysis of effect sizes in consumer behavior experiments. Journal of Consumer Research, 12(1): 97–103.
Pfeffer, J., & Sutton, R. I. 2006. Hard facts, dangerous half-truths, and total nonsense: Profiting from evidence-based management. Cambridge, MA: Harvard Business Press.
Platt, J. R. 1964. Strong inference. Science, 146(3642): 347–353.
Pornpitakpan, C. 1999. The effects of cultural adaptation on business relationships: Americans selling to Japanese and Thais. Journal of International Business Studies, 30(2): 317–337.
Reeb, D., Sakakibara, M., & Mahmood, I. P. 2012. From the editors: Endogeneity in international business research. Journal of International Business Studies, 43(3): 211–218.
Roth, A. E., Prasnikar, V., Okuno-Fujiwara, M., & Zamir, S. 1991. Bargaining and market behavior in Jerusalem, Ljubljana, Pittsburgh, and Tokyo: An experimental study. The American Economic Review, 81(5): 1068–1095.
Roth, K. 1995. Managing international interdependence: CEO characteristics in a resource-based framework. Academy of Management Journal, 38(1): 200–231.
Rousseau, D. M. 2006. Is there such a thing as "evidence-based management"? Academy of Management Review, 31(2): 256–269.
Rynes, S. L., Rousseau, D. M., & Barends, E. 2014. From the guest editors: Change the world: Teach evidence-based practice! Academy of Management Learning & Education, 13(3): 305–321.
Sackett, P. R., & Larson, Jr., J. R. 1990. Research strategies and tactics in industrial and organizational psychology.
Scandura, T. A., & Williams, E. A. 2000.
Research methodology in management: Current practices, trends, and implications for future research. Academy of Management Journal, 43(6): 1248–1264.
Schwab, D. P. 2013. Research methods for organizational studies. Abingdon: Psychology Press.
Shamdasani, P. N., & Sheth, J. N. 1995. An experimental approach to investigating satisfaction and continuity in marketing alliances. European Journal of Marketing, 29(4): 6–23.
Thomas, D. C., Cuervo-Cazurra, A., & Brannen, M. Y. 2011. From the editors: Explaining theoretical relationships in international business research: Focusing on the arrows, NOT the boxes. Journal of International Business Studies, 42(9): 1073–1078.
Torelli, C. J., Monga, A. B., & Kaikati, A. M. 2012. Doing poorly by doing good: Corporate social responsibility and brand concepts. Journal of Consumer Research, 38(5): 948–963.
Wan, W. W., Luk, C. L., & Chow, C. W. 2014. Consumer responses to sexual advertising: The intersection of modernization, evolution, and international marketing. Journal of International Business Studies, 45(6): 751–782.
Welch, C., Piekkari, R., Plakoyiannaki, E., & Paavilainen-Mäntymäki, E. 2011. Theorizing from case studies: Towards a pluralist future for international business research. Journal of International Business Studies, 42(5): 740–762.
Wong, P. L. K., & Ellis, P. 2002. Social ties and partner identification in Sino–Hong Kong international joint ventures. Journal of International Business Studies, 32(2): 267–289.
Wooldridge, J. 2010. Econometric analysis of cross section and panel data, 2nd edn. Cambridge, MA: MIT Press.
Yin, R. K. 2013. Case study research: Design and methods. Thousand Oaks, CA: Sage.
Zacharakis, A. L., McMullen, J. S., & Shepherd, D. A. 2007. Venture capitalists' decision policies across three countries: An institutional theory perspective. Journal of International Business Studies, 38(5): 691–708.