Tuesday, May 5, 2009

Evaluation of online learning environment

Evaluation of an online learning environment: Course name: “Noise Awareness Training”

Overview / background:
“Noise Awareness” is a training course available on ALISON, a free global e-learning website. It aims to help organizations be legally compliant with the Control of Noise at Work Regulations. It gives managers and employees an improved awareness of the risks associated with noise and of how those risks can be managed, which enables them to contend with noise risk in the workplace. This technology is therefore used to deliver online training and support to managers and employees.

The online training course is aimed at employees at all levels who undertake, plan, or manage work carried out in buildings and premises, following the Health and Safety Executive’s guidance note.



The Problem & Report Questions:
Many training courses are available in online learning environments; however, some of them are not effective and need further development to meet users’ needs, specifically regarding the effectiveness of the learning environment/course itself.
This report seeks to examine the effectiveness of the online learning environment of the “Noise Awareness” training course. It will answer the following questions:
· Are the program’s goals and objectives supported by a strategy that enables the learner to use knowledge-construction techniques?
· Does it use suitable technology (adaptable for users with disabilities, with appropriate plug-ins and bandwidth requirements)?





1. Read the full report by visiting this link:

http://www.scribd.com/doc/14985176/Report





2. Visit the survey at this link:
http://www.surveyconsole.com/console/TakeSurvey?id=569218


3. PowerPoint Presentation:
http://www.slideshare.net/u066945/evaluation-of-online-learning-environment

Monday, May 4, 2009

Evaluation of CAI

A survey for evaluating computer-based instruction


We have developed a survey (questionnaire) to evaluate the technical and pedagogical features of a piece of instructional software, and we then posted the instrument to Survey Console. The instrument has 12 items on technical features and 8 items on pedagogical features. Each item uses a five-point rating scale (Strongly agree, Agree, Undecided, Disagree, Strongly disagree).
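To make the rating scale concrete, here is a minimal sketch in Python of how responses to a single survey item could be tallied and turned into a numeric item mean. The responses below are hypothetical illustration data, not results from our survey:

from collections import Counter

SCALE = ["Strongly agree", "Agree", "Undecided", "Disagree", "Strongly disagree"]

# Hypothetical answers from five respondents to one survey item.
item_responses = ["Agree", "Strongly agree", "Agree", "Undecided", "Disagree"]

counts = Counter(item_responses)
for label in SCALE:
    print(f"{label:18} {counts.get(label, 0)}")

# Map the labels to scores 5..1 so the item can be summarized as a mean.
score = {label: 5 - i for i, label in enumerate(SCALE)}
mean = sum(score[r] for r in item_responses) / len(item_responses)
print(f"Item mean = {mean:.2f}")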



You can visit this link to find the survey:

Proposal for Evaluating an Educational Project: ACTIONS Model

A proposal to evaluate an instructional program using the ACTIONS model


The project was developed specifically for grade five students in basic education schools in Oman. It covers the third unit of the Math curriculum and consists of three topics from the unit on Superposition. These topics were: triangles’ areas, superposition, and triangles’ types. The program was developed in English. It aimed to enable the learner to:
1. Measure the symmetric sides and angles of superposed geometric shapes.
2. Classify triangles according to their sides and angles.
3. Calculate triangles’ areas.


Here we will evaluate this program using the ACTIONS model, focusing on four of its levels: accessibility, interactivity and user-friendliness, teaching and learning, and speed.

1. Accessibility level:

At this level we will measure how accessible the program is for learners and how flexible it is. We will evaluate accessibility in two ways: assessing conformance to checklists/guidelines, including the use of automated checkers, and testing with assistive technologies. These two approaches are probably the most appropriate for teachers, designers, and developers who do not have easy access to users or experts. Using them, we will measure how flexibly the program’s materials can be accessed and how easily feedback can be received from the instructor.
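To give a feel for what an automated checker does at this level, below is a minimal sketch in Python. It assumes a single well-known guideline (images must carry alternative text for screen-reader users); real checkers test many such rules, and the HTML fragment here is hypothetical sample content:

from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and not dict(attrs).get("alt"):
            self.violations.append(dict(attrs).get("src", "<no src>"))

# Hypothetical fragment of one course screen.
page = '<p>Triangle types</p><img src="triangle.png"><img src="angles.png" alt="Angle types">'

checker = AltTextChecker()
checker.feed(page)
for src in checker.violations:
    print(f"Accessibility violation, missing alt text: {src}")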

2. Interactivity & user-friendliness level:

The program’s navigability, ease of use, branching, and support strategies will all be evaluated using a survey questionnaire filled out by the users. The results will then be analyzed both to show an example of the information available from the questionnaire and to validate the questionnaire as a relevant data-gathering mechanism. The questionnaire will be designed to answer two questions: what kind of interaction does this program enable, and how easy is it to use and learn?


3. Teaching & learning level:

At this level we will examine the kinds of learning, the instructional approaches, and the best technologies for supporting teaching and learning. Standardized ANUSET surveys make it possible to obtain comprehensive student feedback on teaching and learning at the end of each semester. These evaluation surveys come in a variety of forms and are available in hard copy or online.
Moreover, we can use David Baume’s ‘Monitoring and evaluating teaching’ surveys, through which we can get feedback from students in class, from students outside class, and from peers.

4. Speed level:

At this level we will measure how quickly courses can be created and distributed with this technology, and how quickly materials can be updated.
We can also observe several students to assess how quickly they can access the technology’s features.
From the following link you can see our PowerPoint presentation on this article:




Sunday, May 3, 2009

Evaluation in educational technology

Educational Technology at Omani Higher Education Institutions
Dr. Ali Sharaf Al Musawi
Dr. Hamoud Nasser Al Hashmi


This summary of the research focuses on the evaluation methodology used, in terms of purpose and instruments. The purpose of this research was to address current and prospective views on educational technology (ET) in order to discover the difficulties in, and develop, its utilization in Omani higher education. It was designed to assess and evaluate the current status of ET and aimed at determining indicators that would help formulate a future strategic plan for ET in Omani higher education. The need for such research has many justifications, such as exploring the extent to which ET services are utilized by Omani higher education institutions and paving the way for researchers and decision makers to measure the cost-effectiveness of ET services for planning purposes. Several questions were raised, such as: What are the current quantitative levels of technical and technological equipment/facilities? How effective are the current design, production, and use of instructional software/equipment?

The main instruments used to carry out this research were two questionnaires: the faculty members' questionnaire and the technical/administrative staff questionnaire. They were developed by the researchers by generating a list of potential ET issues derived from the literature and from national-level standardized surveys. In addition, in-depth interviews were conducted to verify some areas of the effectiveness of instructional software/equipment use brought up by faculty members in the questionnaire. The faculty members' questionnaire has four sections: (1) demographics; (2) career development; (3) ability to use technology; and (4) training needs. The technical/administrative staff questionnaire has the same sections plus one more on quantities, budget, and staff issues. The reliability coefficient was then measured by Cronbach's alpha. The results were organized into the following levels: technical resources, human and financial resources, training, design and production, use, correlation, and effectiveness.
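For readers unfamiliar with the Cronbach's alpha coefficient mentioned above, here is a minimal sketch in Python of how it is computed from questionnaire data. The response matrix is hypothetical illustration data, not the study's data:

from statistics import pvariance

# Rows are respondents, columns are questionnaire items scored 1-5.
responses = [
    [5, 4, 4, 5],
    [3, 3, 2, 3],
    [4, 4, 5, 4],
    [2, 3, 3, 2],
]

k = len(responses[0])                # number of items
items = list(zip(*responses))        # scores grouped per item
sum_item_variances = sum(pvariance(item) for item in items)
total_score_variance = pvariance([sum(row) for row in responses])

# Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)
print(f"Cronbach's alpha = {alpha:.2f}")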

Here you can see the PowerPoint presentation of this summary:

Levels and techniques of evaluation in educational technology

The LMS Moodle: A Usability Evaluation
_________________________
JAY MELTON
Prefectural University of Kumamoto


The Level:

This study is a preliminary one to determine whether Moodle’s registration system and assignment submission module have sufficient levels of usability for the study of English writing by Japanese science graduate students. The number of participants, four, is therefore too small to support any sweeping claims. However, as Nielsen (1994) has written, a clear picture of a software package’s usability can be quickly determined with three to five users. The evaluation in this study was at the curriculum level, because it evaluated the usability of the registration system and assignment submission module within a specific curriculum: the study of English writing for Japanese science graduate students.


The Technique:

The participants were observed while performing a series of tasks designed to simulate the online submission of a homework assignment in the Moodle LMS. The data for each task were collected by the evaluator writing notes on a worksheet mirroring each of the tasks. In addition, each participant’s actions were videotaped for review, to ensure that all actions were recorded. Alongside the note-taking and video recording, Ericsson and Simon’s “Think-Aloud” technique was employed (as cited in Preece et al., 2002, p. 365). In this technique, participants are encouraged to tell evaluators what they are doing and thinking about as they work through the various steps of an evaluation; this helps evaluators form an idea of what is going through participants’ minds during the test.



*Reference:

Evaluation Strategies

The educational technology project: “Building a Garden” (third unit in Math for grade five).


In our computer-based instructional program, which is called “Building a Garden,” we will use observation as both an instrument and an evaluation strategy for this educational project. This is because observations offer critical insights into an instructor’s performance, complementing student ratings and other forms of evaluation to contribute to a fuller and more accurate representation of overall teaching quality. Observation is appropriate for judging specific dimensions of the quality and effectiveness of the program, including achievement of the goals, the content, the design and organization of the course, the methods and materials used in delivery, and the evaluation of student work. This observation will be carried out for both summative and formative purposes. For summative evaluation, prior consensus will be reached about what constitutes quality teaching within the discipline, what the observers will be looking for, and the process for carrying out and recording the observations. A post-observation meeting with students and teachers allows an opportunity for constructive feedback and assistance in developing a plan for improvement. We, as observers, will use a checklist to evaluate the project; it will cover and measure different aspects while users use and navigate through the program.

At the link below, you will find the checklist, which we have taken from another source and adapted for our project. It was originally designed to evaluate websites, and we have modified it to evaluate our educational project.

Comparative and non-comparative evaluation in educational technology

Non-Comparative Study: Perceptions of Online Learning Quality given Comfort with Technology, Motivation to Learn Technology Skills, Satisfaction, & Online Learning Experience

Michael C. Rodriguez and Ann Ooms
University of Minnesota
Marcel Montanez
New Mexico State University
Yelena L. Yan
University of Minnesota
April, 2005
Paper presented at the annual meeting
of the American Educational Research Association, Montreal, Canada.


A Comparative Study: Students’ Perceptions of Online Learning
Karl L. Smart and James J. Cappel
Central Michigan University, Mount Pleasant, MI, USA
karl.smart@cmich.edu james.cappel@cmich.edu




“Students’ Perceptions of Online Learning” is a comparative study of learners’ perceptions and performance, while “Perceptions of Online Learning Quality given Comfort with Technology, Motivation to Learn Technology Skills, Satisfaction, & Online Learning Experience” is a non-comparative study.
The “Students’ Perceptions of Online Learning” study examines students’ perceptions of integrating online components into two undergraduate business courses, where students completed online learning modules prior to class discussion. In contrast, the latter study examines the audience’s comfort with technology, satisfaction with those experiences, and perceived quality, given their experience with online or hybrid courses.
In the comparative study, the overall sample was fairly evenly distributed by gender (54 percent female and 46 percent male). The participants consisted of 58 percent 4th-year students (classified as seniors in the United States), 39 percent 3rd-year students (juniors), and 4 percent 2nd-year students (sophomores), and most (94 percent) were business majors. The data for this study are based on students’ experiences taking an online learning unit offered by the Information Technology Training Initiative (ITTI) of the Michigan Virtual University (MVU). The participants were distributed across two courses developed by the university: a “required course” taken by third-year (junior) and fourth-year (senior) students, and an “elective course” taken by second-year (sophomore) and third-year (junior) students. The instructors observed the students’ participation in team assignments, three group exams, and a major group project. On the other hand, the non-comparative study was conducted with approximately 700 professional and graduate education students. A survey was used as the instrument; it was developed and reviewed by instructional technology experts and researchers, and piloted with a sample of the target audience prior to publication online.

The comparative research was not about assessing the difference between online and face-to-face instruction; rather, it was about how technology can be used effectively in the classroom and how students react to it. However, the main weakness of the comparative study was that it compared the use of integrated online units in an elective course and a required course without taking into consideration factors that can affect the results, such as learner characteristics, course content, and the learning context. In addition, the largest dissatisfaction factor reported among the participants was the time required to complete the online modules.

The findings from the observation in the comparative study showed that most evaluation measures focus on subjects’ perceptions of the online units. The findings of the survey, meanwhile, showed that for students with online course experience, comfort with technology was related to satisfaction with the online course experience, which in turn was related to perceived quality; motivation to learn more about technology was also related to satisfaction with the online learning experience. For students with hybrid online course experience (WebCT), comfort was related to satisfaction and motivation; motivation was also related to satisfaction; and satisfaction was ultimately related to perceived quality. In this group (WebCT), comfort was also directly related to perceived quality. Finally, among students with no online course-related experience, comfort was related to motivation to learn about technology, but neither of these factors was related to perceived quality of online courses. For students with limited or no online course experience, perceptions of online course quality were more difficult to explain.
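As a sketch of the kind of correlational analysis behind such findings, the Python snippet below computes Pearson’s r between two of the variables (comfort with technology and satisfaction). The ratings are hypothetical, not the study’s data, and the study’s actual procedure may differ:

from math import sqrt

# Hypothetical 1-5 ratings from seven respondents.
comfort      = [4, 5, 3, 2, 4, 5, 3]
satisfaction = [4, 5, 3, 3, 4, 4, 2]

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"r(comfort, satisfaction) = {pearson_r(comfort, satisfaction):.2f}")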


PowerPoint Presentation: http://www.slideshare.net/u066945/compartive-and-non

Evaluation in educational technology

The effect of mobile phone screen size on video based learning
Nipan Maniar
University of Portsmouth, Portsmouth, United Kingdom
Email: nipan.maniar@port.ac.uk
Emily Bennett, Steve Hand and George Allan
University of Portsmouth, Portsmouth, United Kingdom
Email: {emily.bennett, steve.hand, george.allan}@port.ac.uk


This research is a first step towards investigating the effect that screen size has on video-based m-learning, by addressing the following research questions:
Research Question 1: Does a learner’s subjective opinion of learning via video differ based on the screen size?
Research Question 2: Does a larger screen size result in a significantly higher amount of information learnt via video, compared with a smaller screen size?
To answer the research questions, three pilot experiments were conducted. The pilots used three types of screen:
· ‘Small screen’: Nokia 6600 mobile telephone, screen size = 1.65 inches diagonal, video playback resolution = 128 x 96 pixels.
· ‘Medium screen’: Motorola E1000 mobile telephone, screen size = 2.28 inches diagonal, video playback resolution = 320 x 240 pixels.
· ‘Large screen’: Compaq iPAQ H3800 PDA, screen size = 3.78 inches diagonal, video playback resolution = 320 x 240 pixels.
Moreover, they used the following questionnaire items on the features a phone may have for m-learning:
Q1. This form of communication could increase access to learning.
Q2. This form of communication could increase the quality of m-learning.
Q3. I wouldn’t mind carrying the mobile device.
Q4. Watching the video on the mobile device was fun.
Q5. I would recommend ‘mobile learning’.
Q6. The screen was bright enough.
Q7. The screen size was large enough.
Q8. The overall picture quality was good enough.
Q9. The content of the video was clearly visible.

The results show that students who used the 1.65-inch screen, compared with students who used the 3.78-inch and 2.28-inch screens, had a significantly lower subjective opinion of the screen quality and learnt a significantly smaller amount. This may be because people tend to pay more attention when viewing a larger screen display. Moreover, the results of the students who used the 3.78-inch and 2.28-inch screens were not significantly different from each other. Nevertheless, most students prefer using a small-screen handheld device for m-learning, and fewer students own a large-screen device such as a PDA. This is due to the portability of small-screen handheld devices.
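As an illustration of how “significantly different” can be established across the three screen-size groups, here is a minimal Python sketch using a one-way ANOVA. The recall scores are hypothetical illustration data, not the paper’s data:

from scipy import stats

# Hypothetical per-student recall scores for each screen-size group.
small  = [10, 12, 9, 11, 10]   # 1.65-inch screen (Nokia 6600)
medium = [14, 15, 13, 16, 14]  # 2.28-inch screen (Motorola E1000)
large  = [15, 14, 16, 15, 13]  # 3.78-inch screen (Compaq iPAQ H3800)

f_stat, p_value = stats.f_oneway(small, medium, large)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < .05 suggests the group means differ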