Ronaldo Mota – Scientific Director of Digital Pages and member of the Brazilian Academy of Education
The National Student Performance Exam (Enade) is one of the evaluations that make up the National System for the Evaluation of Higher Education (Sinaes), created by Law No. 10,861 of April 14, 2004. Sinaes also comprises the Evaluation of Undergraduate Courses and the Institutional Evaluation, which, together with Enade, form an evaluative tripod for assessing the quality of courses and institutions of higher education in Brazil.
Enade’s objective has been to evaluate and monitor students’ learning and academic performance in relation to the syllabus content prescribed in the National Curriculum Guidelines (DCNs) of the respective undergraduate course, as well as to measure their ability to understand contemporary themes connected to other areas of knowledge.
Both the report commissioned by the Brazilian government from the Organisation for Economic Co-operation and Development (OECD) in 2018 and the report of the Federal Court of Accounts (TCU) on Sinaes (TC 010.471/2017-0) expose the fragility of Enade as it is currently applied. They highlight its inability to adequately address the central elements of learning and criticize the process for being too costly, with no clear return in the form of reliable results.
In addition, its triennial application by area of knowledge has ended up establishing, as an almost universal practice in institutions, different levels of concern for the classes that will take the exam and those that will not. For the groups being examined, the narrow focus on preparing for the questions that appeared most frequently in previous years distorts the DCNs themselves. For those who do not take the exam, there is no evidence that the results obtained by those who do are being used for the continuous improvement of learning.
Similarly, there is no evidence that Enade can accurately portray anything more solid about how the learning it measures compares with what the learner was expected to know, the basis of the well-known Indicator of the Difference between Observed and Expected Performance (IDD).
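In schematic terms (a simplified sketch of the general idea only, not INEP’s official specification), the IDD of a course can be thought of as the average gap between each graduate’s observed Enade performance and the performance that would be expected from his or her profile at entry:

\[
\mathrm{IDD}_j = \frac{1}{n_j} \sum_{i=1}^{n_j} \left( y_{ij} - \hat{y}_{ij} \right)
\]

where \(y_{ij}\) is the observed performance of graduate \(i\) of course \(j\) and \(\hat{y}_{ij}\) is the performance expected for a student with that entry profile. The criticism above is precisely that the current exam does not provide a reliable basis for either term of this difference.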
An additional fact makes the diagnosis even more difficult: because the exam is not part of the graduating student’s academic record, his or her result is highly sensitive to factors associated with motivation toward the exam itself. With respect to motivating students to take the examination seriously and responsibly, it is essential that institutions make clear the importance of the Individual Student Performance Report, an instrument accessible to the student, as provided for in the Sinaes law, through which students can compare themselves with the others in their own class and see their relative standing among graduates in their region and in the country.
Enade does, in fact, propose to compare those who take the same edition of the exam, but it is inadequate for comparative diagnoses across different editions. Even among the graduates of a given edition, the grading-curve adjustments yield insufficient information about how satisfactory the learning process was in absolute terms.
Even with all these difficulties, if we were training professionals for the last century, one could argue that something relatively substantive is being measured in terms of content, procedures and techniques associated with a given profession. The drama is that the future, which is already underway, indicates that there are elements as relevant as, or more relevant than, what is supposedly being measured, namely the ability to learn continuously and the learner’s attitudes when facing complex problems.
It is therefore necessary to include the differentials that an undergraduate course should give learners with respect to their own understanding of how they learn best and how they behave in carrying out missions.
By incorporating this approach, which corresponds to the state of the art in learning theory, the most appropriate proposal would be for the exam to reveal the graduating student’s intellectual maturity with respect to items such as: i) ability to develop critical, logical and abstract reasoning; ii) mastery of the scientific method and of solving problems and missions; iii) ability to understand complex texts and to write so as to be clearly understood by others (sophisticated literacy); iv) substantive mathematical literacy, going beyond simple mathematical operations; v) ability to gather knowledge from different areas, making it possible to solve problems that require multidisciplinarity and multiple viewpoints; vi) understanding of processes involving simple programming and elements of modeling and simulation, ingredients inherent to contemporary themes; and vii) mastery of the socio-emotional skills associated with teamwork on complex issues.
Finally, a possible new Enade would measure the transversal ability to learn continuously and some behavioral aspects of the graduating student, allowing the same test to be applied without distinction across all areas of knowledge. With questions formulated on the basis of Item Response Theory, it would provide a comparison between the results obtained by graduating and incoming students, as well as the evolution of learning across classes of subsequent or previous years. In this way, we could represent more faithfully the student’s capacity to learn, the potential of the contemporary professional we are training and the differentials in student performance, producing an IDD more consistent than the one currently adopted.
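As a purely illustrative exercise, the sketch below (in Python, with simulated data, hypothetical item parameters and a simple Rasch model, assumptions of mine and not part of any official Enade methodology) shows how ability estimates calibrated on a common item scale would allow incoming and graduating cohorts to be compared directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, pre-calibrated item difficulties on a common scale
# (an assumption for illustration; real calibration would come from pilot data).
b = np.linspace(-2.0, 2.0, 30)

def simulate_responses(thetas, b):
    """Simulate right/wrong (1/0) answers under a Rasch model."""
    p = 1.0 / (1.0 + np.exp(-(thetas[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).astype(int)

def estimate_ability(responses, b, grid=np.linspace(-4.0, 4.0, 801)):
    """Maximum-likelihood ability estimate per respondent, found on a grid."""
    p = 1.0 / (1.0 + np.exp(-(grid[:, None] - b[None, :])))   # (grid points, items)
    loglik = responses @ np.log(p).T + (1 - responses) @ np.log(1.0 - p).T
    return grid[np.argmax(loglik, axis=1)]

# Simulated cohorts: graduates assumed, for illustration only, to be more able.
incoming   = simulate_responses(rng.normal(-0.3, 1.0, 500), b)
graduating = simulate_responses(rng.normal(0.5, 1.0, 500), b)

theta_in  = estimate_ability(incoming, b)
theta_out = estimate_ability(graduating, b)

# Because both groups are scored on the same calibrated scale, the gap in mean
# ability is one possible indicator of learning gain during the course.
print(f"mean ability, incoming:   {theta_in.mean():+.2f}")
print(f"mean ability, graduating: {theta_out.mean():+.2f}")
print(f"estimated gain:           {theta_out.mean() - theta_in.mean():+.2f}")
```

In a real application, items would have to be anchored across editions and cohorts for such comparisons, and for a reformulated IDD, to be meaningful.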
Additionally, just as relevant as the information and assessments gathered about the student at the time of Enade is the follow-up of graduates. Through an accessible and attractive online platform, data on graduates’ level of professional success and personal satisfaction should be collected voluntarily, along with their a posteriori perception of how much, and in which respects, their training was adequate. These elements should be essential inputs to institutional evaluation and self-assessment, as well as to the evaluation of each course.
