How to improve the experience of test takers

How do applicants react when they have to take a study aptitude test as part of their application? What determines whether they are satisfied with the test? And what do we need to do if we want a test to be accepted by our applicants? We surveyed over 2,000 people who had taken one of six different study aptitude tests from ITB. One of the tests was specific to a single study programme, three were specific to certain fields of study (e.g. STEM programmes), and two were general ability tests unrelated to any field of study.

We investigated overall satisfaction, face validity, absence of strain and controllability as established acceptance dimensions, for which reference values already exist from tests used in personnel selection. We also asked how well participants felt informed in advance, how well they could maintain their concentration and how confident they were that their personal data would be handled properly. In addition, we wanted to know whether fees, test duration, degree of subject specificity and mode of administration (in test centres, or from home with video surveillance, so-called “proctoring”) had an impact on the various aspects of acceptance.

First of all, we were pleased to see that all six study aptitude tests received predominantly positive, or at least satisfactory, ratings, which on average were better than the ratings we are familiar with from tests used in personnel selection. But what about the differences between the tests?

The more subject-specific a test was, the better its so-called face validity was rated. The two general tests in our sample received weaker ratings here than the more specific tests. The specific tests also scored significantly better in the overall evaluation, although this effect was smaller than for face validity. We can therefore conclude that applicants prefer subject-specific tests to general ones.

Test duration also played a role: shorter tests were rated better, especially with regard to absence of strain, but also in the overall evaluation. Test takers also rated the information and preparation opportunities provided positively. With regard to data privacy, almost all participants gave positive ratings. For online-proctored tests, however, the corresponding ratings were somewhat weaker: “only” 90 percent gave positive ratings, and about 10 percent of participants were more critical here. Online-proctored tests were still rated positively overall, but on-site testing in test centres scored better in the overall evaluation.

Contrary to our expectations, the amount of the test fee had hardly any influence on acceptance. Applicants are thus quite willing to pay money for a good test. However, we suspect that the specifics of the respective selection situation also played a role, because the highest fees in our sample were charged by a private university that also charges tuition fees. Those who apply there presumably have no objection to a selection fee. Nevertheless, for reasons of social fairness, we advocate low test fees so that applicants with low incomes have equal opportunities.

The results provide valuable guidance for the design of aptitude tests and can help to improve the acceptance of such tests among applicants. The study has since been published in a scientific journal:

Denker, M., Schütte, C., Kersting, M., Weppert, D., & Stegt, S. J. (2023). How can applicants’ reactions to scholastic aptitude tests be improved? A closer look at specific and general tests. Frontiers in Education, 7, 931841. https://doi.org/10.3389/feduc.2022.931841
