penyeliaan-wahid



SUPERVISION & ASSESSMENT SKILLS (KEMAHIRAN PENYELIAAN & PENTAKSIRAN)

WAHID RAZZALY UNIVERSITI TUN HUSSEIN ONN MALAYSIA (UTHM)

5 June 2007 

PTK3

MQF STRUCTURE (STRUKTUR MQF): levels and typical qualifications by sector

Level 1 – Skills Certificate 1; recognition of prior experiential learning (PPPT/APEL)
Level 2 – Skills Certificate 2; PPPT/APEL
Level 3 – Skills Certificate 3 / Certificate; PPPT/APEL
Level 4 – Diploma; PPPT/APEL
Level 5 – Advanced Diploma (General Degree); PPPT/APEL
Level 6 – Bachelor's Degree (Honours); Graduate Certificate & Diploma
Level 7 – Master's Degree; Postgraduate Certificate & Diploma
Level 8 – Doctorate; Post-Doctoral

Sectors covered: Higher Education (University), Lifelong Learning, Technical & Vocational, Skills.


MQF EDUCATION LEVELS & PATHWAYS (TAHAP & LALUAN PENDIDIKAN MQF)

[Diagram of progression routes through the framework, including: Recognition of Prior Experiential Learning (PPPT/APEL); Skills Certificates 1-3 and Skills Diploma; Technical & Vocational Certificate and Diploma; Advanced Diploma; entry qualifications (SPM and other recognised qualifications, STPM/STAM, Matriculation, Foundation); Bachelor's Degree with Honours (3-5 years); Graduate Certificate & Diploma; Master's by research, coursework or mixed mode, and Professional Master's (4 years); Postgraduate Certificate & Diploma; Ph.D and doctorates; Postgraduate Professional Awards, Fellow and Master Craftsmanship.]

MQF LEARNING OUTCOME DOMAINS (Levels 1-8)

Field & Programme Learning Outcomes cover:
• Knowledge of the discipline
• Practical / technical skills
• Social responsibility and accountability
• Values, attitudes and professionalism
• Communication and teamwork skills
• Problem solving and scientific skills
• Information management and lifelong learning
• Managerial and entrepreneurial skills


Standard: Guidelines on Standard of Specific Disciplines at Bachelor Degree Level, Vol 1, Ministry of Education, Malaysia, 2003.

A standard that is explicit but not rigid, covering:

Educational Vision

General educational goals

Qualifications

Learning outcomes

Programme design

Assessment

Entry criteria

Academic staff

Educational resources


Educational Business

Do our activities contribute towards the development of effective graduates (outcomes)?

Quality Teaching from the Students' Perspective (from a survey):

1. Relates to real world applications

2. Teach at students’ level

3. Make learning fun

4. Concern for students

5. Enthusiastic


Main / Current Issues

Lecturer’s Competencies

1. Content

2. Deliver

3. Assess

4. Evaluate

5. Reporting

Issues?

1. …………..

2. …………..

3. …………..

4. …………..

5. …………..

6. …………..

7. …………..


Discussion Topics

• Effective Supervision

 – Practical / laboratory work

 – Field work

 – Studio work

 – Practical Training

 – Projek Sarjana Muda (PSM)

• Assessment

 – Concept

 – Planning and Administration of tests

 – Test / item Development

 – Coursework assessment

 – Statistical Application on assessment

 – Grading

 – Reporting of performance

Supervision Issues ?

• Your Issues?

1.

2.

3.

4.
5.

6.

7.


Effective Supervision

• Supervising (Degree)

• Guiding (Master)

• Advising (Doctorate)

Jenkins, M. G., "Standards and Codes in Mechanical Engineering Education: Confounding Constraints or Helpful Hindrances?," Standardization News, Vol 27, No 9, pp 20-25, 1999.


Delivery Activities

• Practical / laboratory work

• Field work

• Studio work

• Projek Sarjana Muda (PSM)

• Practical Training

Assessment Issues?

• Your Issues?

1.

2.

3.

4.
5.

6.

7.


ASSESSMENT

• Definition of Test, Measurement, Assessment and Evaluation

• To understand the various types and approaches to assessment

• To understand the application of Bloom's Taxonomy in constructing exams

• To understand the good practices in grading

Understanding the concepts

Assessment: the process of gathering information about how learners are progressing in their learning. It gathers information about what learners know and can demonstrate as a result of their learning processes.

Nitko, A.J. (1996) Educational assessment of students, Englewood Cliffs: Prentice-Hall.

Examine the above definition and discuss the following:
1. List the various ways in which lecturers gather information about learners' progress in learning. Which of these ways are more useful than others, and why?
2. What does the phrase 'progressing in their learning' mean to you?
3. Is it sufficient to assess what pupils know and can demonstrate (knowledge and skills only)?
4. Propose your own definition of assessment.


Measurement

Measurement refers to the process by which attributes or dimensions of some physical object, process or opinion are determined. The process depends on the use of standard instruments such as rulers, questionnaires, standardized tests, etc.

In measurement we are not assessing anything. We are simply collecting information relative to some established rule or principle.

To measure is to apply a standard scale or measuring device to an object, events, or conditions, according to practices accepted by those who are skilled in the use of the device or scale.

Kizlik, B. (2003) Measurement, assessment and evaluation in education, at www.adprima.com/measurement.htm

Evaluation

Evaluation is a process that enables us to determine the value of something. It allows us to make judgments about a given situation. When we evaluate, we yield information regarding the worthiness, appropriateness, goodness, validity, legality of something for which a reliable measurement or assessment has been made.

It is the process of making judgments about the quality of a learner's performance using the information gathered during assessment. Ogunniyi, M.B. (1991) Educational Measurement and Evaluation, Singapore: Longman.


Testing

Testing is just one of a number of strategies for measurement. It is a process by which we can formally gather valid information about the performance of pupils in given subjects. It comes in many forms such as multiple choice testing, essay testing, completion items testing, true-false testing, etc.

Activity: Draw a concept map based on the need to distinguish the concepts assessment, testing, measurement and evaluation.

Testing, Measurement, Evaluation = ASSESSMENT


Testing, Measurement & Evaluation: Some Differences

Criteria: Testing | Measurement | Evaluation
Definition: Measuring tool | Process of determining a performance level | Process of judging behavioural change and assigning value
(Other criteria left for discussion: Purpose, Method, Time, Coverage, Result.)

Purposes of assessment
• Diagnose learners' strengths and needs
• Provide feedback on teaching and learning
• Provide a basis for instructional placement
• Inform and guide instruction
• Communicate learning expectations
• Motivate and focus learner attention and effort
• Provide practice applying knowledge and skills
• Provide a basis for learner evaluation
• Provide a basis for evaluating programme effectiveness

McTighe, J. and Ferrara, S. (1994) Performance based assessment in the classroom, Pennsylvania, Educational Leadership, 4-16.


Forms of assessment

Formative: any assessment that is ongoing, meant to improve learning and help direct the teaching-learning process. It is sometimes called continuous assessment.

Discuss how teachers formatively assess their pupils, indicating the potential barriers they often encounter. Suggest ways of overcoming these barriers.

Summative: happens at the end of a course or programme, aimed at determining the effectiveness of the whole learning episode at its completion. The continuous assessment mark, along with the end-of-year, programme or course marks, is often aggregated in some way to arrive at a decision about the effectiveness of the entire learning episode.

In what way is your teaching subject summatively assessed? What issues about summative assessment in your subject are currently topical?

Assessment is effective when it…

• is student-centred

• is congruent with instructional objectives

• is relevant

• is comprehensive

• is clear in purpose, directions and expectations

• is objective and fair

• simulates behaviour/product/performance

• incites active responses

• shows progress/development over time


Types & Approaches to Assessment

There are numerous terms used to describe different types and approaches to learner assessment. Although somewhat arbitrary, it is useful to view these terms as representing dichotomous poles (McAlpine, 2002):

Formative vs Summative
Informal vs Formal
Continuous vs Final
Process vs Product
Divergent vs Convergent

McAlpine, M. 'Principles of assessment', Glasgow: University of Glasgow, 2002.

Formative vs Summative Assessment

• Formative assessment is designed to assist the learning process by providing feedback to the learner, which can be used to identify strengths and weaknesses and hence improve future performance. Formative assessment is most appropriate where the results are to be used internally by those involved in the learning process (students, teachers, curriculum developers).

• Summative assessment is used primarily to make decisions for grading or to determine readiness for progression. It is often done at the end of an educational activity and is designed to judge the learner's overall performance. In addition to providing the basis for grade assignment, summative assessment is used to communicate students' abilities to external stakeholders, e.g., administrators and employers.


Informal vs Formal Assessment

• With informal assessment, the judgements are integrated with other tasks, e.g., lecturer feedback on the answer to a question or preceptor feedback provided while performing a bedside procedure. Informal assessment is most often used to provide formative feedback. As such, it tends to be less threatening and thus less stressful to the student. However, informal feedback is prone to high subjectivity or bias.

• Formal assessment occurs when students are aware that the task they are doing is for assessment purposes, e.g., a written examination. Most formal assessments are also summative in nature and thus tend to have greater motivational impact and are associated with increased stress. Given their role in decision-making, formal assessments should be held to higher standards of reliability and validity than informal assessments.

Continuous vs Final Assessment

• Continuous assessment occurs throughout a learning experience and is most appropriate when student and/or instructor knowledge of progress or achievement is needed to determine the subsequent progression or sequence of activities. Continuous assessment provides both students and teachers with the information needed to improve teaching and learning in process. Obviously, continuous assessment involves increased effort for both teacher and student.

• Final assessment is that which takes place only at the end of a learning activity. It is most appropriate when learning can only be assessed as a complete whole rather than as constituent parts. Typically, final assessment is used for summative decision-making. Obviously, due to its timing, final assessment cannot be used for formative purposes.


Process vs Product Assessment

• Process assessment focuses on the steps or procedures underlying a particular ability or task, e.g. the cognitive steps in performing a mathematical operation. Because it provides more detailed information, process assessment is most useful when a student is learning a new skill and for providing formative feedback to assist in improving performance.

• Product assessment focuses on evaluating the result or outcome of a process. Using the above example, we would focus on the answer to the math computation. Product assessment is most appropriate for documenting proficiency or competency in a given skill, i.e., for summative purposes. In general, product assessments are easier to create than process assessments, requiring only a specification of the attributes of the final product.

Divergent vs Convergent Assessment

• Divergent assessments are those for which a range of answers or solutions might be considered correct. Examples include essay tests, and solutions to the typical types of indeterminate problems posed in PBL. Divergent assessments tend to be more authentic and most appropriate in evaluating higher cognitive skills. However, these types of assessment are often time-consuming to evaluate and the resulting judgments often exhibit poor reliability.

• A convergent assessment has only one correct response (per item). Objective test items are the best example and demonstrate the value of this approach in assessing knowledge. Obviously, convergent assessments are easier to evaluate or score than divergent assessments. Unfortunately, this "ease of use" often leads to the widespread application of this approach even when it is contrary to good assessment practices.


Assessment vs Evaluation

• Depending on the authority or dictionary consulted, assessment and evaluation may be treated as synonyms or as distinctly different concepts. If a distinction exists, it probably involves what is being measured, and why and how the measurements are made.

• In terms of what, it is often said that we assess students and we evaluate instruction. This distinction derives from the use of evaluation research methods to make judgements about the worth of educational activities. Moreover, it emphasizes an individual focus of assessment, i.e., using information to help identify a learner's needs and document his or her progress toward meeting goals.

• In terms of why and how the measurements are made, the table by Apple & Krumsieg (1998) compares and contrasts assessment and evaluation on several important dimensions.

Assessment & Evaluation Compared

DIMENSION | ASSESSMENT | EVALUATION
Timing | Formative | Summative
Focus on measurement | Process-oriented | Product-oriented
Relationship between administrator and recipient | Reflective | Prescriptive
Findings and uses | Diagnostic | Judgemental
Modifiability of criteria, measures | Flexible | Fixed
Standards of measurement | Absolute (individual) | Comparative
Relation between objects of assessment/evaluation | Cooperative | Competitive

Apple, D.K. & Krumsieg, K. 'Process education teaching institute handbook', Pacific Crest (1998)


Quotes on Tests & Exams

• "Why do we do it? Tests aren't fun to take and certainly not fun to grade, and sometimes it seems life would be a lot simpler if learning – and not tests and grades – were more important" (Speaking of Teaching, Stanford University Newsletter on Teaching, Fall 1992, Vol. 4, No. 1)

• "… one of the greatest problems in institutional forms of learning is that students study for the tests and exams, instead of studying to grasp the object of learning and instead of studying for life…" (Bowden & Marton, The University of Learning: Beyond Quality and Competence in Higher Education, London, Kogan Page, 1998)

Grades and Grading

• There are a variety of ways of grading, from setting up an absolute standard to using a curve. Whatever model is adopted, make sure the students know in advance how they will be evaluated. Grading policies should be spelt out in the syllabus.

• Academic performance – mastery of knowledge and skills – should be the focus of grading.

• Encouraging an orientation towards learning rather than towards grades will doubly assist the students – they will comprehend and retain information better and continue to learn how to learn more efficiently and effectively.


Assessment Methods

I. Direct Assessment Methods:

• Directly determine whether students have mastered the content of their academic programs.

• Require students to display their knowledge and skills as they respond to the instrument itself (i.e. objective tests, essays, presentations, and classroom assignments).

Assessment methods: Direct

• Standardized exams
• Locally developed exams
• Oral exams
• Portfolios (work collected over time)
• Performance appraisal
• Oral presentations
• Projects, demonstrations, case studies, simulations
• Capstone experience (embodied in capstone courses)
• Juried activities
• Evaluation of field work
• Behavioral observations


II. Indirect Assessment Methods:

• Ask students to reflect on their learning, what they have learned and experienced, rather than to demonstrate it (i.e. surveys and interviews).

• Provide details about instructional or curricular strengths that cannot be provided by direct methods alone.

Indirect assessment methods

 – Written surveys and questionnaires

• Entering students

• Current students

• Graduating seniors

• Faculty

• Alumni

• Employers

• Parents

 – Exit interviews

 – Focus groups

 – External examiner

 – Archival records


Planning and Managing Tests

• Determine the test outcomes
 – Determine the learning content to test
 – Develop the test discriminatory table
 – Develop items / questions
 – Review items / questions

• Analysis of items (a short sketch of the two indices follows this list)
 – Difficulty index
 – Discrimination index
 – Learning taxonomy

• Administration of testing
 – Preparatory stage
 – Implementation stage
 – Coordination stage
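The two item-analysis indices named in the list above are simple proportions that can be computed directly from scored answer data. Below is a minimal sketch, assuming dichotomously scored items (1 = correct, 0 = incorrect); the 27% upper/lower grouping is a common convention rather than something specified in the slides, and the data are invented for illustration.

```python
from typing import List

def difficulty_index(item_scores: List[int]) -> float:
    """Proportion of students who answered the item correctly (the p-value).
    Higher values indicate an easier item."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores: List[int], total_scores: List[float],
                         group_fraction: float = 0.27) -> float:
    """Upper-group minus lower-group proportion correct.
    Students are ranked by total test score; the top and bottom
    `group_fraction` of the cohort form the comparison groups."""
    n = len(total_scores)
    k = max(1, int(n * group_fraction))
    ranked = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    upper = [item_scores[i] for i in ranked[:k]]
    lower = [item_scores[i] for i in ranked[-k:]]
    return sum(upper) / k - sum(lower) / k

# Invented data: one item's scores and the same students' total test scores.
item = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
totals = [78, 85, 40, 66, 35, 90, 72, 50, 42, 88]
print(f"Difficulty index p = {difficulty_index(item):.2f}")
print(f"Discrimination index D = {discrimination_index(item, totals):.2f}")
```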


Test development principles

• Validity
 – Generally content validity, and the responsibility of the subject matter expert

• Reliability (a small consistency check is sketched after this list)
 – The extent to which scores are consistent across different scorers

• Fairness
 – The extent to which score interpretations are valid and reliable regardless of race, origin, gender, disability, etc.
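Reliability, as defined above, concerns the consistency of scores across different scorers. One simple illustrative check (not the only measure of reliability, and not prescribed in the slides) is the correlation between two scorers' marks for the same set of scripts; the marks below are invented.

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two lists of paired scores;
    values close to 1 indicate the scorers rank scripts consistently."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Invented marks awarded by two scorers to the same ten scripts.
scorer_a = [72, 65, 80, 58, 90, 74, 61, 85, 69, 77]
scorer_b = [70, 68, 78, 55, 88, 75, 64, 82, 72, 74]
print(f"Inter-scorer consistency r = {pearson_r(scorer_a, scorer_b):.2f}")
```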


Assessment of Projects

Management & Implementation of the Project

Definition & Objective

Implementation
• Coordination Committee
• Supervisor & Supervisory Guidelines
• Supervision (log book, seminar, report)

Assessment
• Seminar presentation
• Report
• Grading & Passing Criteria

Grading and Reporting

Why Grading?

Grading Principles
 – Easily understood by learners
 – Made known to learners
 – Fair to all
 – Support and strengthen the learning process
 – As widely acceptable as possible

Norm-Referenced or Criterion-Based Grading? (see the sketch after the table)

Score / % marks | Grade | Point
85-100 | A | 4.0
80-84 | A- | 3.7
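Criterion-based grading of the kind shown in the table is essentially a threshold lookup. The sketch below encodes only the two bands visible on the slide (85-100 = A = 4.0, 80-84 = A- = 3.7); the remaining bands are not reproduced here and would have to be taken from the institution's own grading scheme.

```python
# Grade bands taken from the two rows shown on the slide; the rest of the
# scale is not shown there and is deliberately left out.
GRADE_BANDS = [
    (85, "A", 4.0),   # 85-100
    (80, "A-", 3.7),  # 80-84
]

def grade_for(mark: float):
    """Return (grade, point) for a % mark under criterion-based grading,
    or None if the mark falls outside the bands defined above."""
    for lower_bound, grade, point in GRADE_BANDS:
        if mark >= lower_bound:
            return grade, point
    return None

print(grade_for(92))  # ('A', 4.0)
print(grade_for(81))  # ('A-', 3.7)
```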


Statistical Application

• Assessment is the integration of both quantitative and qualitative data to provide information on the nature of the learner, what is learned and how it is learned.

• Statistical Analysis (a short sketch follows this list)
 – Mean
 – Median
 – Mode
 – Standard Deviation
 – Variance

• Presentation
 – Cumulative Frequency
 – Histogram / Bar Chart
 – Normalisation

• Making Decisions
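For the descriptive statistics and frequency presentation listed above, Python's standard library is sufficient; a minimal sketch with invented marks follows.

```python
from statistics import mean, median, mode, stdev, variance
from collections import Counter

# Invented % marks for one assessment.
marks = [45, 52, 58, 58, 63, 67, 70, 72, 75, 81, 84, 90]

print(f"Mean      : {mean(marks):.1f}")
print(f"Median    : {median(marks):.1f}")
print(f"Mode      : {mode(marks)}")
print(f"Std. dev. : {stdev(marks):.1f}")
print(f"Variance  : {variance(marks):.1f}")

# Frequencies by 10-mark class interval: the basis for a histogram / bar chart
# and a cumulative frequency table.
bins = Counter((m // 10) * 10 for m in marks)
cumulative = 0
for lower in sorted(bins):
    cumulative += bins[lower]
    print(f"{lower}-{lower + 9}: frequency {bins[lower]}, cumulative {cumulative}")
```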


Summary

Academically speaking, the Vision of the University depends on the Mission, which depends on the Strategy, which depends on the Objectives, which depend on Measures, which depend on Initiatives.

University Excellence

Graduate Excellence

Programme Excellence

Programme Design Excellence

Subject Design Excellence

Delivery Excellence

Assessment Excellence

"Therefore everybody needs to be excellent in practising 'assessment'."


Thank you

• These slides are gathered from a number of sources.

• Of prominence are:

• Prof. Ir. Dr. Zainai Mohamed (UMK)

• Pr. Ir. Dr. Azlan Abd. Rahman (UTM)

• Mr. Richard James (Australia)

• UTHM, Buku Log PSM 


Assessing Learning in Australian Universities: Ideas, strategies and resources for quality in student assessment. www.cshe.unimelb.edu.au/assessinglearning

Core principles of effective assessment

Enhancing learning by enhancing assessment

Assessment is a central element in the overall quality of teaching and learning in higher education. Well designed assessment sets clear expectations, establishes a reasonable workload (one that does not push students into rote reproductive approaches to study), and provides opportunities for students to self-monitor, rehearse, practise and receive feedback. Assessment is an integral component of a coherent educational experience.

The ideas and strategies in the Assessing Student Learning resources support three interrelated objectives for quality in student assessment in higher education.

Three objectives for higher education assessment:

1. assessment that guides and encourages effective approaches to learning;

2. assessment that validly and reliably measures expected learning outcomes, in particular the higher-order learning that characterises higher education; and

3. assessment and grading that defines and protects academic standards.

The relationship between assessment practices and the overall quality of teaching and learning is often underestimated, yet assessment requirements and the clarity of assessment criteria and standards significantly influence the effectiveness of student learning. Carefully designed assessment contributes directly to the way students approach their study and therefore contributes indirectly, but powerfully, to the quality of their learning.

For most students, assessment requirements literally define the curriculum. Assessment is therefore a potent strategic tool for educators with which to spell out the learning that will be rewarded and to guide students into effective approaches to study. Equally, however, poorly designed assessment has the potential to hinder learning or stifle curriculum innovation.

Excerpt from James, R., McInnis, C. and Devlin, M. (2002) Assessing Learning in Australian Universities. This section was prepared by Richard James.


16 indicators of effective assessment in higher education

A checklist for quality in student assessment

1) Assessment is treated by staff and students as an integral and prominent component of the entire teaching and learning process rather than a final adjunct to it.

2) The multiple roles of assessment are recognised. The powerful motivating effect of assessment requirements on students is understood and assessment tasks are designed to foster valued study habits.

3) There is a faculty/departmental policy that guides individuals' assessment practices. Subject assessment is integrated into an overall plan for course assessment.

4) There is a clear alignment between expected learning outcomes, what is taught and learnt, and the knowledge and skills assessed: there is a closed and coherent 'curriculum loop'.

5) Assessment tasks assess the capacity to analyse and synthesise new information and concepts rather than simply recall information previously presented.

6) A variety of assessment methods is employed so that the limitations of particular methods are minimised.

7) Assessment tasks are designed to assess relevant generic skills as well as subject-specific knowledge and skills.

8) There is a steady progression in the complexity and demands of assessment requirements in the later years of courses.

9) There is provision for student choice in assessment tasks and weighting at certain times.

10) Student and staff workloads are considered in the scheduling and design of assessment tasks.

11) Excessive assessment is avoided. Assessment tasks are designed to sample student learning.

12) Assessment tasks are weighted to balance the developmental ('formative') and judgemental ('summative') roles of assessment. Early low-stakes, low-weight assessment is used to provide students with feedback.

13) Grades are calculated and reported on the basis of clearly articulated learning outcomes and criteria for levels of achievement.

14) Students receive explanatory and diagnostic feedback as well as grades.

15) Assessment tasks are checked to ensure there are no inherent biases that may disadvantage particular student groups.

16) Plagiarism is minimised through careful task design, explicit education and appropriate monitoring of academic honesty.


What students value in assessment

Unambiguous expectations. Students study more effectively when they know what they are working towards. Students value, and expect, transparency in the way their knowledge will be assessed: they wish to see a clear relationship between lectures, tutorials, practical classes and subject resources, and what they are expected to demonstrate they know and can do. They also wish to understand how grades are determined, and they expect timely feedback that 1) explains the grade they have received, 2) rewards their achievement, as appropriate, and 3) offers suggestions for how they can improve.

'Authentic' tasks. Students value assessment tasks they perceive to be 'real': assessment tasks that present challenges to be taken seriously, not only for the grades at stake, but also for the nature of the knowledge and skills they are expected to demonstrate. Students value assessment tasks they believe mirror the skills needed in the workplace. Students are anxious to test themselves and to compare their performance against others. Assessment tasks that students perceive to be trivial or superficial are less likely to evoke a strong commitment to study.

Choice and flexibility. Many students express a strong preference for choices in the nature, weighting and timing of assessment tasks. This preference for 'negotiated' assessment is a logical extension of the trend towards offering students more flexible ways of studying and more choice in study options. Students who seek 'more say' in assessment often say they prefer to be assessed in ways that show their particular skills in the best light. They also argue they will study more effectively if they can arrange their timetables for submitting assessable work to suit their overall workload. Providing higher education students with options in assessment, in a carefully structured way, is worth considering in many higher education courses, though it is not a common practice. Encouraging students to engage with the curriculum expectations in this way should assist them in becoming more autonomous and independent learners.


Re-positioning the role of assessment

Capturing the full educational benefits of well-designed assessment requires many of the conventional assumptions about assessment in higher education to be reconsidered.

For academic staff, assessment is often a final consideration in their planning of the curriculum. This is not to imply staff underestimate or undervalue the role or importance of assessment, but assessment is often considered once other curriculum decisions have been made. The primary concerns of academic staff are often with designing learning outcomes and planning teaching and learning activities that will produce these outcomes. In contrast, students often work 'backwards' through the curriculum, focusing first and foremost on how they will be assessed and what they will be required to demonstrate they have learned.

How academic staff view teaching and learning:
• What course content should be taught?
• What should students learn? What then are the learning objectives?
• What teaching and learning methods are appropriate?
• How can student learning be assessed?
(Assessment can be the final consideration for staff in the design of the teaching and learning process.)

How students view teaching and learning:
• In what ways am I going to be assessed?
• What do I need to know?
• What approaches to study should I adopt?
(Assessment is usually at the forefront of students' perception of the teaching and learning process.)

(Figure: Re-positioning student assessment as a strategic tool for enhancing teaching and learning.)

For teaching staff, recognising the potent effects of assessment requirements on student study habits and capitalising on the capacity of assessment for creating preferred patterns of study is a powerful means of reconceptualising the use of assessment.

But designing assessment to influence students' patterns of study in positive ways can present significant challenges. Assessment in higher education must serve a number of purposes. The overall cycle of student assessment (from the design and declaration of assessment tasks, to the evaluation and reporting of student achievement) must not only guide student approaches to study and provide students with feedback on their progress, but also must determine their readiness to proceed to the next level of study, judge their 'fitness to practice' and ultimately protect and guarantee academic standards. These purposes are often loosely placed in two categories, developmental ('formative', concerned with students' ongoing educational progression) and judgmental ('summative', where the emphasis is on making decisions on satisfactory completion or readiness to progress to the next level of study). Both are legitimate purposes for assessment in higher education and effective assessment programs must be designed with both considerations in mind.


A comparison of norm-referencing and criterion-referencing methods for determining student grades in higher education

The essential characteristic of norm-referencing is that students are awarded their grades on the basis of their ranking within a particular cohort. Norm-referencing involves fitting a ranked list of students' 'raw scores' to a pre-determined distribution for awarding grades. Usually, grades are spread to fit a 'bell curve' (a 'normal distribution' in statistical terminology), either by qualitative, informal rough-reckoning or by statistical techniques of varying complexity. For large student cohorts (such as in senior secondary education), statistical moderation processes are used to adjust or standardise student scores to fit a normal distribution. This adjustment is necessary when comparability of scores across different subjects is required (such as when subject scores are added to create an aggregate ENTER score for making university selection decisions).

Norm-referencing is based on the assumption that a roughly similar range of human performance can be expected for any student group. There is a strong culture of norm-referencing in higher education. It is evident in many commonplace practices, such as the expectation that the mean of a cohort's results should be a fixed percentage year-in, year-out (often this occurs when comparability across subjects is needed for the award of prizes, for instance), or the policy of awarding first class honours sparingly to a set number of students, and so on.

In contrast, criterion-referencing, as the name implies, involves determining a student's grade by comparing his or her achievements with clearly stated criteria for learning outcomes and clearly stated standards for particular levels of performance. Unlike norm-referencing, there is no pre-determined grade distribution to be generated and a student's grade is in no way influenced by the performance of others. Theoretically, all students within a particular cohort could receive very high (or very low) grades depending solely on the levels of individuals' performances against the established criteria and standards. The goal of criterion-referencing is to report student achievement against objective reference points that are independent of the cohort being assessed. Criterion-referencing can lead to simple pass-fail grading schema, such as in determining fitness-to-practice in professional fields. Criterion-referencing can also lead to reporting student achievement or progress on a series of key criteria rather than as a single grade or percentage.

Which of these methods is preferable? Mostly, students' grades in universities are decided on a mix of both methods, even though there may not be an explicit policy to do so. In fact, the two methods are somewhat interdependent, more so than the brief explanations above might suggest. Logically, norm-referencing must rely on some initial criterion-referencing, since students' 'raw' scores must presumably be determined in the first instance by assessors who have some objective criteria in mind. Criterion-referencing, on the other hand, appears more educationally defensible. But criterion-referencing may be very difficult, if not impossible, to implement in a pure form in many disciplines. It is not always possible to be entirely objective and to comprehensively articulate criteria for learning outcomes: some subjectivity in setting and interpreting levels of achievement is inevitable in higher education. This being the case, sometimes the best we can hope for is to compare individuals' achievements relative to their peers.
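The contrast between the two methods can be made concrete with a small sketch: the same raw scores are graded once against fixed criterion thresholds and once against the cohort's own distribution (via z-scores). The thresholds, cut-offs and scores below are illustrative assumptions, not values taken from the excerpt.

```python
from statistics import mean, stdev

# Invented raw scores for a small cohort.
raw_scores = {"S1": 48, "S2": 55, "S3": 62, "S4": 70, "S5": 81, "S6": 90}

def criterion_grade(score):
    """Criterion-referencing: grade against fixed, pre-stated thresholds,
    independent of how the rest of the cohort performed."""
    if score >= 80:
        return "A"
    if score >= 65:
        return "B"
    if score >= 50:
        return "C"
    return "F"

def norm_grade(score, cohort):
    """Norm-referencing: grade from the student's standing within the cohort
    (here via z-scores), so the same raw score can earn a different grade
    in a different cohort."""
    z = (score - mean(cohort)) / stdev(cohort)
    if z >= 1.0:
        return "A"
    if z >= 0.0:
        return "B"
    if z >= -1.0:
        return "C"
    return "F"

cohort = list(raw_scores.values())
for student, score in raw_scores.items():
    print(f"{student}: raw={score:3d}  criterion={criterion_grade(score)}  "
          f"norm={norm_grade(score, cohort)}")
```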

Excerpt from James, R., McInnis, C. and Devlin, M. (2002) Assessing Learning in Australian Universities. This section was prepared by Richard James.


Norm-referencing, on its own, and if strictly and narrowly implemented, is undoubtedly unfair. With norm-referencing, a student's grade depends – to some extent at least – not only on his or her level of achievement, but also on the achievement of other students. This might lead to obvious inequities if applied without thought to any other considerations. For example, a student who fails in one year may well have passed in other years! The potential for unfairness of this kind is most likely in smaller student cohorts, where norm-referencing may force a spread of grades and exaggerate differences in achievement. Alternatively, norm-referencing might artificially compress the range of difference that actually exists.

Criterion-referencing is worth aspiring towards. Criterion-referencing requires giving thought to expected learning outcomes, it is transparent for students, and the grades derived should be defensible in reasonably objective terms – students should be able to trace their grades to the specifics of their performance on set tasks. Criterion-referencing lays an important framework for student engagement with the learning process and its outcomes.

Recognising, however, that some degree of subjectivity is inevitable in higher education, it is also worthwhile to monitor grade distributions – in other words, to use a modest process of norm-referencing to watch the outcomes of a predominantly criterion-referenced grading model. In doing so, if it is believed too many students are receiving low grades, or too many students are receiving high grades, or the distribution is in some way oddly spread, then this might suggest something is amiss and the assessment process needs looking at. There may be, for instance, a problem with the overall degree of difficulty of the assessment tasks (for example, not enough challenging examination questions, or too few, or assignment tasks that fail to discriminate between students with differing levels of knowledge and skills). There might also be inconsistencies in the way different assessors are judging student work.

Best practice in grading in higher education involves striking a balance between criterion-referencing and norm-referencing. This balance should be strongly oriented towards criterion-referencing as the primary and dominant principle.

In summary:

1) begin with clear statements of expected learning outcomes and levels of achievement;

2) communicate these statements to students (they should be written so they make sense to students);

3) measure student achievement as objectively as possible against these statements, and compute results and grades transparently on this basis; and

4) keep an eye on the spread of grades or scores that are emerging to be alert to anything amiss in assessment tasks and assessor interpretations.