Call for Software Proposals


Getting custom-built software for your research project, group, department, or unit can be very expensive. Fortunately, the Software Engineering senior class CS428/429 in the Computer Science department can write your software for free. Teams of 8 senior CS students work during the Spring 2013 semester on the whole lifecycle of software, from requirements gathering to design, coding, testing, and documentation. The students will deliver your software in 6 incremental milestones during the semester.

If you need software, describe it in a couple of paragraphs and provide contact info. The students ultimately decide which projects to work on, so be specific about your needs (e.g., if you need a certain technology to be used, make sure to list it).

Submission of new proposals is now closed. We have received over 80 proposals from across the whole campus. Thank you, Illinois!

You can find the project proposals below.

We need Online Monitoring and Control software for the SeaQuest experiment at Fermilab, built with the most modern tools available. The software regularly queries a database that is fed every few seconds with the status of the experiment's many detectors (e.g., the high-voltage level on the hundreds of channels supplying HV to wire chambers and phototubes, gas pressures in tracking chambers, beam-related information passed to us from the accelerator group, etc.). This mass of information needs to be displayed graphically so the shift crew can keep an eye on it during data taking, and the software has to generate alerts to a watchdog process if a reading goes out of spec (to wake up the shift crew so they can go and fix the problem!). Finally, some control functions (interactivity) need to be included. Some examples: applying filters and cuts to the displayed data, changing plotting scales to zoom in on a particular time period, and an e-print function to send a snapshot of the displayed graph to our electronic logbook.

That's the monitoring side. On the control side, we also need a GUI for sending certain instructions back to the hardware, specifically to adjust the high-voltage settings on the many detector components. One final feature this online monitoring package has to provide is an easy way for users outside the software group to *add* new display modules on their own: if a new detector is added, or, more commonly, if a different way of presenting some information is desired (e.g., histograms vs. chart-recorder-style plots against time). This is best accomplished by using a well-known language, like Perl, and by providing several example/template codes that are easily modified to do whatever the user wants.
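To make the polling/alerting idea concrete, here is a minimal sketch in Python of the poll-check-alert loop. The table name, column names, channel IDs, and spec limits are hypothetical placeholders, not the actual SeaQuest schema, and any MySQL client library would do in place of PyMySQL.

    # Hypothetical schema: readings(ts, channel, value); limits are examples only.
    import time
    import pymysql

    LIMITS = {"HV_ch042": (1400.0, 1600.0)}  # hypothetical spec limits per channel

    def poll_once(conn):
        with conn.cursor() as cur:
            cur.execute("SELECT channel, value FROM readings ORDER BY ts DESC LIMIT 100")
            for channel, value in cur.fetchall():
                lo, hi = LIMITS.get(channel, (float("-inf"), float("inf")))
                if not lo <= value <= hi:
                    alert_watchdog(channel, value)

    def alert_watchdog(channel, value):
        # stand-in for notifying the watchdog process that wakes the shift crew
        print(f"ALERT: {channel} out of spec: {value}")

    conn = pymysql.connect(host="localhost", user="monitor", database="seaquest")
    while True:
        poll_once(conn)
        time.sleep(5)  # the database is refreshed every few seconds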


I would like this written using the most modern tools available, and my postdoc and I have tentatively settled on a browser-based solution for maximum accessibility and cross-platform compatibility. Individuals responsible for a detector subsystem often like to monitor it from home or on the road, from their own machines which can be running anything from UNIX-X11 to OSX to Windows ... and now mobiles.


Summary:
- The project is to develop a graphical online-monitoring framework for the nuclear/particle physics experiment SeaQuest at Fermilab
- It requires knowledge of MySQL, Perl (or a similar language for CGI scripting), and HTML5/JavaScript (or some superior alternative we don't know about for in-browser graphics)


Contact: Professor Naomi C.R. Makins, (217) 333-7291, Loomis Laboratory of Physics

In summary, I need software that can be used to track the location of workers in the utility steam tunnels and other remote locations. We need to track the workers for health and safety reasons because the environment in many areas is less than ideal (hot, dark, humid, and restricted… i.e., generally unpleasant). Since wireless communications (radio/cell) are currently limited by the tunnel structure, we use a call-in/call-out process. The general consensus is that the process can be improved if the correct type of software is available. There is software on the market that rescue workers use to track their teams, but for our needs it seems to be overkill in some areas and weak in others.

Some preliminary thoughts about the software features…
1. Should be visual in nature. Base the main screen on the existing electronic tunnel entrance maps.
2. Needs a timing function that allows the monitor to keep track of how long a worker has been in a particular location. Heat stress requirements limit the length of time a person can spend in certain locations.
3. Incorporate wireless technology? GPS? Pager system to notify alarms?
4. Incorporate tunnel security system? Tunnel entrance alarms, cameras in key locations, ???
5. Database with worker information (cell #, medical info related to heat stress, etc.) (Allow point and click)
6. Calendar with scheduled work information.
7. Some method of tracking contractors working in the tunnels.
8. Include a process flowchart (e.g., who to call when someone does not check in, etc.)
9. Include database for environmental/safety information. Track tunnel temperature, air flow, entrance condition, missing insulation, etc. This will be used to help perform heat stress calculation and aid in the design of tunnel environmental controls and other safety related projects.
10. Keep track of known problem areas such as leaking steam, standing water, dead end locations, etc.
11. Somehow interface with the existing F&S business management system?
12. Use barcodes (via iPhone) to track when someone enters and exits the tunnel? Each entrance would be given a unique identifier: a person scans the barcode when they enter the tunnel and leaves a brief note on why they are entering (maybe via a drop-down menu of common tasks), then scans the barcode again at the exit point. An alarm system could flag when a tunnel entrance is opened without an authorized person entering barcode information. (A sketch of this check-in/check-out flow follows the list.)
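As a rough illustration of items 2 and 12 together, here is a minimal Python sketch of check-in/check-out tracking with a heat-stress timer. Entrance IDs, time limits, and the alerting mechanism are hypothetical placeholders.

    import time

    HEAT_LIMIT_S = {"entrance_17": 30 * 60}  # hypothetical per-location limits (seconds)
    active = {}  # worker id -> (entrance id, check-in time)

    def check_in(worker, entrance, note=""):
        active[worker] = (entrance, time.time())

    def check_out(worker):
        active.pop(worker, None)

    def scan_for_overstays():
        # run periodically by the monitoring station
        now = time.time()
        for worker, (entrance, t0) in active.items():
            limit = HEAT_LIMIT_S.get(entrance, 60 * 60)  # default 1 hour
            if now - t0 > limit:
                print(f"ALERT: {worker} over the time limit at {entrance}")  # page the monitor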

Contact: Chuck Kammin; F&S Utility Distribution Group; ckammin2 at illinois.edu

Good Computer Science research requires good experimental methods, but the current lack of tools drives researchers to rely on ad-hoc scripts. This usually results in experimental methodologies whose rigor is unsatisfactory. For example, in experiments measuring running time, parallel speedups, or scalability, it is common that measurements are repeated too few times or run only once; the variance of such measurements is rarely mentioned; and data is plotted as the average or the minimum running time instead of showing quantiles and possibly eliminating outliers. This is particularly important in the benchmarking of parallel systems, where sources of randomness are more pronounced and less controllable. In my experience, powerful tools that assist researchers by automating dull tasks allow them to run many more experiments, help them manage the large amount of resulting information, improve the quality of research, and can point to bugs that would otherwise go unnoticed (e.g., unexpectedly high variance).


The goal of this software will be to provide an extensible framework for managing such experiments. Below are some possible directions, but other ideas are welcome. The user should be able to express concisely many different compilation and execution configurations: e.g., [flag0][flag1, flag2][flag3, flag4] should produce 4 compilation configurations (flag0 flag1 flag3; flag0 flag1 flag4; flag0 flag2 flag3; flag0 flag2 flag4), and the same goes for run-time flags (e.g., how many cores to use), different inputs, etc. By appending to these configurations a set of options for how many cores to use, the programmer can quickly describe large experiments with ease. The tool will compile and run all these configurations, multiple times, collecting data for each execution and optionally checking that the program executed correctly (the easiest way to get speedups is to have a buggy parallel program). The tool should be capable of saving and processing the experimental data in many ways (e.g., output .csv files, output gnuplot-friendly data and scripts, etc.) and should allow users to write their own post-processing code and register it with the tool. Automatic visualisation is also very helpful, greatly speeding up the try-fix-retry iterative process of research: the user could describe which data to plot, and the tool could produce those plots for quick inspection. Using this technique I was able to find that one of my run-time implementations was buggy, as it resulted in an order of magnitude more variance than the other ones (while still producing correct results).
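The configuration expansion described above is essentially a Cartesian product over option sets. A minimal sketch in Python, reproducing the [flag0][flag1, flag2][flag3, flag4] example:

    from itertools import product

    def expand(option_sets):
        # each element of option_sets is a list of mutually exclusive options
        return [" ".join(combo) for combo in product(*option_sets)]

    configs = expand([["flag0"], ["flag1", "flag2"], ["flag3", "flag4"]])
    # -> ['flag0 flag1 flag3', 'flag0 flag1 flag4',
    #     'flag0 flag2 flag3', 'flag0 flag2 flag4']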


I believe that such a tool is badly needed, and not only for CS research. I think that if done in a flexible and extensible way, this tool could enjoy fast and widespread adoption!


P.S. Some other features of the tool could be:
* Exploiting multi-cores: more than one experiment can be run in parallel, depending on the target machine and how many cores each experiment requires. For example, when comparing to the sequential code, 10 copies of the code could be run simultaneously.
* User-specified actions and flow: in a current prototype there is only a fixed set of actions (e.g., compile with gcc, compile with cilk++, etc.) and the tool has a fixed flow (compile, run, check correctness). All of these could be made extensible, allowing maximum flexibility (we expect users of this tool to be comfortable writing some code to extend its capabilities, e.g., running multiple loads at the same time, such as server and client code).
* GUI to create and edit configuration files (in a current prototype I am using manually edited XML files, which is far from ideal)
* Nested configuration option-sets: often some compiler options only make sense in combination with other options (e.g., [noprefetch, [prefetchAlgorithm=1, prefetchAlgorithm=2, prefetchAlgorithm=3][prefetchBuff=2, prefetchBuff=4]]). Trying out different prefetch buffer sizes without prefetching is pointless and, without nested option sets, can create many useless configurations in large experiments.
* Stopping and resuming large experiments, or re-running specific subsets of a large experiment and merging the results (e.g., after finding and fixing a bug)
* Automatic scaling for minimizing variance: often, experiments that run in less than a second can exhibit very large variance (>15%). Often, the difference in running times is within that 15-20% range, making any conclusion about relative performance completely unfounded. This variance often comes from external sources (OS and network events). Increasing the running time (e.g., by running the entire benchmarked code in a loop or using larger inputs) can often amortize the effect of such events and minimize variance. The tool could automate this process.
* Automatic scaling for parallelism: when benchmarking on a large multi-core machine (e.g., 24 cores), if the above looping method is used to make runs last ~2 secs on 24 cores, they may take 48 secs on one core, even though a smaller number of iterations could be used when using fewer cores. This can be determined manually by trial and error, but automating it makes much more sense, and could reduce the execution time of long experiments by an order of magnitude.
* Using a database to archive and browse experimental results.
* Instead of the user setting a fixed number of executions per configuration, the tool could use statistical methods to decide when the measurements have converged and dynamically pick the number of repetitions based on the results as they come in.
* Instead of exhaustively executing each configuration of the state-space described by the user, the tool could also try to find the best configuration (e.g., the best combination of compiler flags) by different methods (simulated annealing, gradient descent, etc). However, this is getting into autotuner territory and is probably not a priority.

Contact: Alex Tzannes; Siebel Center 4114; atzannes at illinois.edu

I manage the Food Science and Advanced Biofuels Pilot Plants in the College of ACES. We have a growing program that works with faculty, grad students, and external companies to research, develop, and test new technologies and techniques in Food, Fiber, and Fuel processing. The plants are very large laboratories containing an ever-increasing amount of large equipment. This equipment varies greatly in capability; a couple of examples are listed below.
1. Extruders that produce puffed products like breakfast cereals and cheeto-like products
2. Machines that pasteurize juices or milk and dehydrate to a powder
3. Full equipment lines to convert corn kernels or plant products to ethanol or other fuels
4. Large assortment of support and analysis equipment

I am leading an effort to make this equipment more available to researchers, and one aspect is displaying and organizing the available equipment while keeping good records of usage, maintenance, scheduling, costs, and billing. Ideal software would contain these things (a minimal database sketch follows the list):
1. SQL-type database with web-based front-end for general public and login protected front-end for plant employees
2. Graphically appealing front-end for users to peruse an organized list of available equipment with pictures, usage costs, descriptions, and schedules. Ideally, users would be able to inquire about or request usage of equipment through a web-based form. The form would ask for required information including contact info, requested dates, payment methods, etc. Requests would be stored in the database, and an email notification would be sent to the Plant Manager.
3. Front-end for plant employees to view and eventually approve equipment rentals. Data would be carried over from the initial customer online request to avoid transcription errors. Some thought would have to be put into securely storing UIUC CFOAPS or external company contact information. Payment from companies would be by check, so no credit card information would be stored or charged. Data should be compiled and emailed monthly for plant employees to transfer money appropriately from CFOAPS or send invoices to external companies.
4. Plant employees will also need to be able to add/edit equipment descriptions, pictures, etc. while being able to override schedules, costs, etc.
5. Plant employees should be able to track and enter equipment maintenance needs through software. Ideally, a maintenance schedule could be put in for each piece of equipment, tied to hours or number of uses, and then software could send out alert when a limit is reached and maintenance is needed. After maintenance is completed by employee, the alert on that piece of equipment would be dropped.
6. Logs should be available for usage, maintenance, and other discovered details as we discuss the project. Interesting data visualization would be a plus.
7. Web-based for easy access in multiple locations without requiring any local software installation. Good coding practices should be followed to allow for expansion of software to accommodate for future needs.
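As a starting point for discussion, here is a minimal sketch of the kind of schema items 1-5 imply, using Python's built-in sqlite3 (the production system would presumably use a server-grade SQL database); all table and column names are illustrative only.

    import sqlite3

    conn = sqlite3.connect("pilot_plant.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS equipment (
        id INTEGER PRIMARY KEY,
        name TEXT, description TEXT, photo_path TEXT, hourly_cost REAL
    );
    CREATE TABLE IF NOT EXISTS requests (
        id INTEGER PRIMARY KEY,
        equipment_id INTEGER REFERENCES equipment(id),
        contact TEXT, start_date TEXT, end_date TEXT,
        payment_method TEXT, status TEXT DEFAULT 'pending'   -- approved by staff later
    );
    CREATE TABLE IF NOT EXISTS maintenance (
        id INTEGER PRIMARY KEY,
        equipment_id INTEGER REFERENCES equipment(id),
        due_after_hours REAL, hours_used REAL DEFAULT 0, note TEXT
    );
    """)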

Contact: Brian Jacobson; Food Science & Human Nutrition; bjacobs3 at illinois.edu; (217) 300-5404

The software is basically a graph algorithm that looks for paths between known metabolites. The vision is for the expert system to guide the user through developing complete biotechnological solutions for any number of bioprocesses. Following are a couple of use cases:
1) Engineer a bacterium to degrade a known environmental pollutant
2) Engineer a bacterium to synthesize a useful biochemical.

The software should:
a) Find available degradation pathways
b) Produce an audit of best solutions
c) Provide guidance on implementing the best solution(s)

An MS student is currently improving the required graph algorithms.
A current working prototype is shown here: http://abe-bhaleraolab.age.uiuc.edu/strain_designer/start/
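For a sense of the core algorithm, here is an illustrative Python sketch of a shortest-path search over a metabolite graph whose edges are known reactions; the toy graph is hypothetical, not real pathway data.

    from collections import deque

    def find_pathway(graph, start, target):
        # graph maps each metabolite to the metabolites reachable in one reaction
        queue = deque([[start]])
        seen = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path  # shortest pathway (fewest reactions)
            for nxt in graph.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # no known pathway

    toy = {"pollutant": ["intermediate"], "intermediate": ["CO2"]}
    print(find_pathway(toy, "pollutant", "CO2"))  # ['pollutant', 'intermediate', 'CO2']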

The possible student assignment would be to develop a state-of-the-art user experience, including dynamic behavior (AJAX goodness), an appropriate action history (something like computable documents), and possibly visualization paradigms (using, e.g., http://philogb.github.com/jit/ )

There is also some opportunity to help with developing the core graph-based algorithm, although an MS student will be working on it as well.

We are currently using Python/Django/JS for the web application. NoSQL (Neo4j/CouchDB) would be welcome too if appropriate, and we would also like a RESTful interface at some point. We have all the code hosted on our own Git server, so students will be able to use collaborative coding as well.

Contact: Prof. Kaustubh D. Bhalerao; Department of Agricultural and Biological Engineering; bhalerao at illinois.edu

We already have two examples of virtual labs: a chemistry lab for “lab safety” training, and a radiation lab to conduct (virtual) half-life and shielding experiments. These interactive, 3D, computer-game-like models have been built on the Unity3D (www.unity3d.com) platform. We would like to extend these models and build a virtual, interactive model of a nuclear reactor lab. A 3D model of a reactor facility, built in Unity3D, is already available. What we need is a generic set of capabilities that would allow a non-CS major to build the kinds of functionalities that are generally needed in a virtual lab.

Though in general a lab would require a significant amount of “setting up of the facility and establishing connections,” a reactor lab does not. In the development of a virtual reactor lab, the capabilities needed are (a rough sketch of the overall control loop follows the list):
1. To be able to set switches to desired settings (toggles and knobs, with either discrete or continuous values). By clicking on a knob, one would change the value of the parameter controlled by that knob (for example, how much a valve is open).
2. Running a physics model using the values of parameters set by the operator, and calculating the values of phase variables such as power, temperature, flux, etc.
3. Display the value of the parameters set by the operator in display panels in various formats.
4. Also display the values of the quantities being calculated using the physics model (or measured in a real reactor), in display panels in various formats (needles, digital, bar chart, time chart, etc)
5. Record all values set by the operator as well as measured quantities as a function of time (for future analyses).
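A language-agnostic sketch of the control loop implied by items 1-5 (the real implementation would live inside Unity3D, and the physics model here is a deliberately trivial placeholder):

    import time

    state = {"valve_open": 0.5}  # item 1: operator-set parameters
    history = []                 # item 5: time-stamped log for future analyses

    def physics_step(params):
        # item 2: compute phase variables from the operator settings
        power = 100.0 * params["valve_open"]   # placeholder model, not reactor physics
        temperature = 300.0 + 0.8 * power      # placeholder model
        return {"power": power, "temperature": temperature}

    def display(params, values):
        # items 3-4: in the real lab these values drive in-game display panels
        print(params, values)

    def tick():
        calculated = physics_step(state)
        display(state, calculated)
        history.append((time.time(), dict(state), calculated))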

Some of this project would be similar to the needs mentioned in the sample example of a CS428 project, copied below. The major difference is that the display will be on 2D surfaces in the 3D environment of an “interactive game” (rather than on 2D monitors).

Summary: The project would require familiarity with a game-development platform (e.g., Unity3D). [A second level of functionality would be in the form of a “supervisor console” which would, mimicking a real control room, provide the ability to require a second level of approval (from the supervisor) before a particular operator action is implemented.]

Contact: Prof. Rizwan Uddin; 108 Talbot Lab; 244-4944; rizwan at illinois.edu

We would like to have a dual-platform app that can serve three specific groups of our community: the students, the faculty/staff, and the alumni.
For the students, we would like them to be able to register for courses, check news, check the calendar, shop the online store, check schedules, and access their blog.
For the faculty/staff: accessing the faculty blog, student rosters, class schedules, and faculty applicants; commenting on applicants; watching recorded and live faculty interviews; and checking the events calendar.
For alumni: watching videos, making donations, checking events, and shopping the online store.

Contact: Prof. Dan Work; dbwork at illinois.edu

The College of Business uses a Windows-based classroom capture system. The classroom cameras are not controlled during classes, so each camera is fixed on a wide-angle view of the room. We would like to have a filter that takes a high-resolution (1080p) video feed from the camera, identifies motion in frames, then crops/resizes the active area to a lower resolution. The project can be broken into several phases and levels of complexity.

· Create an appropriate algorithm to identify motion and virtual camera movements. This could be done in post-processing and would not need to be real-time (see the sketch after this list).
· Create a filter for real-time use. This would require optimization for speed and minimal processor time.
· Create a virtual capture device that could be used with Skype/Lync/Encoders, etc.
· Monitor feeds from multiple cameras and combine them into a single stream.
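As a sketch of the first phase, here is a minimal frame-differencing approach in Python/OpenCV: threshold the difference between consecutive frames, take the bounding box of the changed pixels, and crop/resize to it. The file name and tuning constants are placeholders.

    import cv2

    cap = cv2.VideoCapture("lecture_1080p.mp4")  # placeholder input file
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev_gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        pts = cv2.findNonZero(mask)             # pixels that changed between frames
        if pts is not None:
            x, y, w, h = cv2.boundingRect(pts)  # bounding box of all motion
            active = frame[y:y + h, x:x + w]
            out = cv2.resize(active, (640, 360))  # "virtual camera" output frame
        prev_gray = gray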

Although we currently use a Windows-based system we would be open to using alternative systems if they could be integrated into our workflow.

Contact: Steve Hess; shess at illinois.edu

I am a retired Illinois Natural History Survey (Prairie Research Institute, UI) scientist (entomologist) and was the Education/Outreach Coordinator. I'm also a freelance photographer and have traveled extensively photographing the world's habitats and organisms. I would like to propose the development of a series of interactive apps on the biodiversity of Illinois, the U.S., and the world. These would entail programming skills using Flash (or other, newer applications) and would allow users to tour some of the great biological regions of the planet, but also to learn about local natural history, geology, and the status of life on this planet. I have developed extensive presentations, using Mac Keynote, that incorporate maps, still images, and video footage, and these could be the basis for the interactive apps. I also have an example of a Flash program that was developed for a visitor's center and has been very popular with the audience.

Summary:
—The project is to develop a computer, smartphone, or tablet application, using existing materials contained in Mac Keynote presentations, for use by the public or by various educational institutions.

Contact: Dr. Michael R. Jeffords; jeffords at illinois.edu

I work in the MUST-SIM earthquake engineering lab in Newmark. I have a tethered camera shooting application that needs to be replaced. We use 2-4 Nikon DSLR cameras connected to one Windows XP or 7 system using a USB hub. The application monitors a TCP link for trigger messages. When it receives a message it sends a fire command to all of the cameras and downloads the pictures.

The two major requirements are: 1) that the application fires all of the cameras simultaneously, and 2) that the application notifies the researcher if a camera firing has failed.

The current software is written in Visual Basic (by me) and uses a COM library to talk to the cameras. There is a Nikon SDK that could also be used for this.
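To make the trigger path concrete, here is a minimal Python sketch: listen on TCP for a trigger message, fire all cameras in parallel, and report any failures back within the time window. The message format, port, and fire_camera body are placeholders for the COM/Nikon SDK calls the real application would make.

    import socket
    from concurrent.futures import ThreadPoolExecutor

    def fire_camera(cam_id):
        # placeholder for the SDK call; return True on success
        return True

    def handle_trigger(cameras):
        with ThreadPoolExecutor(max_workers=len(cameras)) as pool:
            results = list(pool.map(fire_camera, cameras))
        return [c for c, ok in zip(cameras, results) if not ok]  # failed cameras

    srv = socket.create_server(("", 5000))  # placeholder port
    while True:
        conn, _ = srv.accept()
        with conn:
            if conn.recv(64).startswith(b"TRIGGER"):  # placeholder message format
                failed = handle_trigger(["cam1", "cam2", "cam3"])
                conn.sendall(b"OK" if not failed else ("FAIL " + " ".join(failed)).encode())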

Let me describe the requirements from a customer perspective:
1. Our tests consist of slowly crushing steel and concrete structures in a controlled fashion. The tests consist of 2000-5000 ramp-and-hold commands. During the hold period, our cameras are triggered to fire by our test control application via 1-3 camera controller systems. Tests can use anywhere from 5 to 15 cameras.
2. All of the features of the current application need to be preserved. This includes the firing of cameras from the GUI and a stand-alone mode of operation needed for test setup, small-scale testing, and EOT projects.
3. The current version of the application works for us in that it fires all of the cameras and downloads their photos within a 5-10 second window. This window is critical because camera firing is one of the main factors that determine the length of our tests. The current application cannot report errors from the cameras within this time frame; instead, it displays an error message on the camera controller screen some time after the next step has started. The new application needs to be able to fire the cameras, download the pictures, and acknowledge success/failure to the test control application within the 5-10 second window (the faster the better).
4. Our tests use too many monitors. We need to have the pictures uploaded to a central location and a client that can be used to view the pictures as they come in. I have a Redhat server that can be used for this purpose. The camera controllers need to report errors and statuses to either the test control application or the central "picture server" so that we don't have to have someone monitoring their screens during a test. The client could be the fancy graphical application you are looking for.
5. Our tests are very slow, so it is difficult to see what is going on with them. The pictures could solve this problem if they could be combined into stop-motion movies that could be viewed during the test. We use the MATLAB image processing toolbox to do this after the test. I believe there are Linux tools that do this as well.
6. One of the first tasks that is done after a test is completed is to create a movie which contains stop-motion movies of the camera images synchronized with animated plots. The data for these plots is available on the server via a REST web service that I am developing. Creating one of these movies automatically at the end of each test day would speed up the analysis that is done during testing. The data that could be plotted comes from a set of up to 200 different sensors. We could designate a predefined dataset for plotting but should also include a feature whereby the researcher can select additional data to plot.
7. All of our data needs to be uploaded to a national repository located at Purdue University. They have their own REST web-service and FTP service that is used for this purpose. The picture server should be able to upload the images to this repository.
8. A lot of our researchers view our tests remotely using up to 5 webcams. It would be good if the picture client could include these live feeds.

You are welcome to visit our lab. We have a very visual setup. Writing software for this place makes my job a pleasant experience, which your students can share.

Contact: Michael Erwin Bletzinger; mbletzin at illinois.edu

We are in need of a software package for my lab. We have two electrodermal activity (EDA) monitors, basically bracelets that measure skin conductance and temperature and can send those signals wirelessly. However, there is currently no software to gather that wireless signal and annotate it in real time, which is what we need to analyze event-related activity, that is, to analyze how specific events in an experiment tend to alter the EDA response. Ideally, the software would also allow for event-related analysis: averaging multiple EDA responses to the same type of event in the experiment, perhaps even visualizing the event averages for each type of event, and producing subject-level data averages per condition to be later pooled and compared across subjects.
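The event-related averaging described above amounts to cutting a fixed window of the signal around every event of a given type and averaging those windows. A minimal sketch with NumPy, where the sampling rate and window length are illustrative values only:

    import numpy as np

    def event_average(signal, event_samples, fs=4, window_s=10):
        # signal: 1-D EDA trace; event_samples: sample indices of one event type
        n = int(window_s * fs)
        epochs = [signal[i:i + n] for i in event_samples if i + n <= len(signal)]
        if not epochs:
            return None
        return np.mean(epochs, axis=0)  # average response to this event type

    # Per-condition, subject-level averages would then be pooled across subjects.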

Contact: Prof. Alejandro Lleras; alejandrolleras at gmail.com

This project would use the OpenAjax Accessibility (OAA) evaluation library to build a web page accessibility inspection extension for the Chrome browser. The design would be based on the OAA Accessibility Extension for Firefox and AInspector for Firebug. The OAA evaluation library is an open-source JavaScript library developed to analyze web pages for accessibility features for people with disabilities. This project would build a user interface that uses the OAA library to analyze web pages and display evaluation results. Part of the project would be to include visualization features so accessibility information could be displayed and highlighted inline, similar to some of the visualization features of the WAVE toolbar. The Chrome extension user interface would be based on the current user interfaces of the Firefox/Firebug accessibility extensions.

Team members or a representative of the team would be expected to meet and report progress on a weekly basis to review the software architecture and user interface design for the Chrome extension. The team would also need to develop a web site to document the features of the Chrome extension and to use standard JavaScript documentation techniques for the code developed. The Chrome extension would be developed under the Mozilla open source license.

References
OpenAjax Accessibility Evaluation Library
OAA Accessibility Extension
AInspector for Firebug
WAVE Toolbar for Firefox
Previous Student Project to Build an Accessibility Extension for Chrome

Jon Gunderson; Coordinator of Information Technology Accessibility; jongund at illinois.edu

The Asian American Cultural Center is in need of software to help us with our space reservation requests. The current system (http://studentaffairs.illinois.edu/diversity/aacc/space_reservations.html) is not meeting our needs, as we have approximately 1,000 meetings and events a year at our center. In our office there are 4 people who process these space requests and respond to the people requesting the space. Tracking who responded to space requests, inputting the requests into the calendar, confirming them, and making sure the rest of the team knows what was done and what still needs to be completed can be a difficult and confusing process. We are looking for software that would streamline this process and make it easier for people to request space. This software would also need to be easily embedded on our website and Facebook. Do you think this would be a feasible project for your CS students?

Mai-Lin Poon; Asian-American Cultural Center; mpoon at illinois.edu

Objective:

Design and build a graphical user interface to enhance our current data processing pipeline for the analysis of 16S Illumina DNA sequencing data.

Detailed Aims:

In the Biocomplexity and Host-Microbe Systems groups at the Institute for Genomic Biology (IGB), we extract representative DNA sequence fragments from bacterial communities (microbiomes) inhabiting various human and non-human primate organs. After sequencing, we have very large datasets, sometimes up to and above 1 million sequences, which require intense data quality checks and processing before they can be analyzed. Our current pipeline for data analysis was built using state-of-the-art bioinformatics tools developed at IGB, but it is command-line based and, when problems arise, somewhat difficult to troubleshoot without the appropriate programming knowledge. The end users of the data pipeline are in general not as skilled in programming as they are in bioinformatics and genomic biology.


We would like to develop a GUI for this pipeline with enhanced error messaging. Ideally, the GUI would prompt users to enter multiple sequencing files, combine these files, and run all necessary data analysis steps. Occasionally, there may be errors in the data files which impede analysis. In the current pipeline, these errors are sometimes missed, and issues do not arise until some of the later steps. Enhanced error messaging outlining the problem is desired at each step to ensure that we are not processing bad files and wasting time. The current pipeline uses a variety of online programs such as MOTHUR and the RDP Classifier, as well as our proprietary tools, which are also online. Details on all steps in the current Illumina data analysis pipeline can be found at http://tornado.igb.uiuc.edu/ and http://cocoa.igb.illinois.edu/tornado2/.
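One way to picture the enhanced error messaging: wrap each external stage of the pipeline so a failure, or an obviously bad output file, stops the run with a clear message instead of surfacing several steps later. A sketch in Python, where the command line and file names are placeholders rather than real pipeline invocations:

    import subprocess
    import sys

    def run_step(name, cmd, output_file):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            sys.exit(f"Step '{name}' failed:\n{result.stderr}")
        # sanity-check the output before the next stage runs
        try:
            if not open(output_file).readline():
                sys.exit(f"Step '{name}' produced an empty output file: {output_file}")
        except OSError as e:
            sys.exit(f"Step '{name}' produced no readable output: {e}")

    run_step("combine", ["combine_seqs", "run1.fastq", "run2.fastq"], "combined.fastq")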


The student who undertakes this assignment need not have any prior knowledge of biology, but should be comfortable with GUI design, scripting languages such as Python, and should be willing to work with non-computer scientists in designing and implementing a practical research tool. Day-to-day work on this project will be overseen by a group of postdoctoral researchers at the IGB.


This project is part of a research effort to understand the microbes that live within us and determine our health, and is a necessary step in the quest to provide personalized medicine.

Prof. Nigel Goldenfeld, Institute for Genomic Biology, nigel at uiuc.edu, 217-338-8027

Scope

In the Biophotonics Imaging Laboratory at the Beckman Institute for Advanced Science and Technology, we have developed a handheld optical imaging device for use in a primary care physician’s office during routine health exams (see figure). At several physicians’ requests, we integrated a small (5”) LCD screen on the back of the device so the physicians could see the tissue images without redirecting their attention from the patient to look at a computer screen. However, the current screen is not ideal and our collaborating physicians have asked for a screen that is less bulky and more versatile. We are now designing a handheld device that has a removable attachment for an iPod touch. The iPod will provide a small form factor, high quality screen, touch control capabilities, and versatility through the iPod’s “app” interface to customize additional functionality and usability. We are looking for software that will enable us to use the iPod to display images that have been captured and processed in a LabVIEW GUI on the host computer, as well as provide some basic control of the GUI.


Specific requirements

The goal of this project is to monitor the images generated on the host computer using the iPod. The transfer protocol must be fast enough to enable display of at least 15 fps, preferably 30 fps. There are several ways this goal may be accomplished, which provides some flexibility in design choices. Objective-C must be used to write the iOS application; however, there is some freedom in the choice of data transfer methods, as well as the method by which the images are taken from the host. Wireless communication is preferred, but if a wired connection is required to achieve sufficient frame rates, then that would be an acceptable solution. In addition to the iOS Objective-C programming, there will need to be a program on the PC for communication on the host side, which can likely utilize a DLL written in C or C++ and called from LabVIEW.


Along with the display of two image feeds on the iPod (one from a miniature color CCD camera and a second from our optical imaging system), we would also like the iOS app to have some functionality specific to our application. For instance, there are certain controls within the handheld device’s GUI that the physicians would like to access or change through the touchscreen of the iPod, rather than setting down the instrument and picking up the mouse to change these on the PC. Other features may be considered and would be helpful, such as (1) automatically resizing and changing the orientation (portrait/landscape) of the displayed images as the device (and thus the iPod) is turned, and (2) accessing information from the accelerometer and gyroscope to provide more detailed information to the physician using the instrument.


In summary, we have a need for a software package that will allow an iOS device to retrieve images from a LabVIEW GUI running on a host PC and display them on the iOS device at a minimum of 15 fps. Additionally, we would like to be able to manipulate some basic controls from the iOS device and access data from the iOS device’s gyroscope and accelerometer to provide additional information that is useful to the physician. We can provide all required hardware. Our imaging system is currently located at the Beckman Institute, where it is being developed and tested on human subjects for imaging eyes, ears, and skin.

Prof. Stephen A. Boppart, Beckman Institute for Advanced Science and Technology, boppart at illinois.edu, (217) 244-7479

My group works with mutant mice and our focus is tying genetic and genomic profiles of the animals to disease-related histopathology. Therefore we work with several different data types, from DNA sequence to high-resolution pathology images, and currently have several different types of databases that we use to keep track of these things separately.


The challenge would be to devise software and, hopefully, user-friendly interfaces to track and integrate these different features. My group has in the past collaborated with or hired computer scientists to create tools for us, and we already have a quite sophisticated database for storing, accessing, and viewing histopathology data. We also already have a very simple database for keeping track of mouse breeding, genotypes, tissues saved, etc. The challenge would be to integrate these (or improved/new ones) to provide a platform for synthesis of genetic, genomic, and pathology data.


This kind of software could be extremely valuable to many types of research groups, including anyone working on model organisms, but also in the human clinical context, especially as human genetics/genomics is now entering the clinical sphere in such a forceful way. There might be some nice opportunities down the road with the Mayo Clinic/UIUC collaboration. This is why the idea seemed to me like a good possible project for your class.

Prof. Lisa Stubbs, Institute for Genomic Biology, ljstubbs at illinois.edu, (217) 244-4000

Goal: To obtain real time measurements of subnanometer distances between two surfaces during molecular force measurements.


Background: The surface force apparatus quantifies the molecular forces between two materials as a function of their (subnanometer) separation. This approach is used to investigate the surface properties of a wide range of materials at the molecular scale. Materials investigated with this instrumentation include, for example, biological membranes and polymer coatings. In the surface force experiment, we quantify the separation between the surfaces of two sample materials with a Fabry-Perot interferometer (Fig. 1A). The distance resolution is ±0.1 nm. The surfaces are moved in and out of contact with a stepper motor (Fig. 1B). White light is directed perpendicular to the two surfaces, and the transmitted wavelengths, which depend on the surface separation, are separated with a spectrometer and projected onto a CCD array chip. The projected image appears as a series of regularly spaced, curved lines that correspond to different interference orders. As the motor changes the distance between the surfaces (Fig. 1B), the transmitted wavelengths change (Fig. 1C). This is detected from changes in the positions of the peaks projected onto the CCD camera (Fig. 1D). From the change in wavelength (pixel position on the chip, Figs. 1C and 1D), we determine the change in the surface separation to within ±0.1 nm. Many different research groups around the world use this instrumentation, so the software developed in this project could be implemented by a large number of research teams in industry and at different universities.


Project Description: The goal of this project is to develop LabVIEW software that will enable users to monitor the positions of transmitted peaks (wavelengths) as a function of the motor readout (separation) during an experiment. First, a mouse click would trigger the acquisition of an initial CCD image and the corresponding motor readout. Then, the user defines the region of the CCD image (range of CCD pixels) to follow during the measurements (1, Fig. 1D). This information will be used to analyze all images acquired during the measurements. After each image is acquired, it will be analyzed to determine the positions of the peak maxima in the image projected onto the CCD camera. This is done by fitting the intensity vs. position in the user-defined region of the CCD image (2, Fig. 1D). The calculated peak positions would be written to a file with the corresponding motor reading (4). These data would be used to generate a plot of peak position versus motor readout in a window during the experiment (3). This plot is used to monitor the progress of the experiment. At the end of a series of measurements, after all of the images are acquired, the peak positions are converted to wavelength using a user-defined set of spectrometer calibration data (5). The peak wavelengths and corresponding motor positions are then written to an Excel or Excel-compatible format (6) for further analysis.
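Although the deliverable is LabVIEW, the numerical core of step 2 (locating a peak maximum by fitting intensity vs. pixel position) can be illustrated in a few lines of Python; the Gaussian peak shape and all values here are assumptions for demonstration.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, center, width, offset):
        return amp * np.exp(-((x - center) ** 2) / (2 * width ** 2)) + offset

    def peak_position(pixels, intensity):
        p0 = [intensity.max() - intensity.min(),      # initial guesses for the fit
              pixels[np.argmax(intensity)], 2.0, intensity.min()]
        popt, _ = curve_fit(gaussian, pixels, intensity, p0=p0)
        return popt[1]  # fitted peak center, in pixels

    x = np.arange(100.0)
    y = gaussian(x, 50, 42.3, 3, 10) + np.random.normal(0, 1, x.size)  # synthetic peak
    print(peak_position(x, y))  # ~42.3; converted to wavelength via the calibration data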


An additional goal would be to modify the software to allow users to control the motor movement (step size and direction) from the computer, during the experiments.


Further details here

Prof. Deborah Leckband; Chemical and Biomolecular Engineering; leckband at illinois.edu

I am an assistant professor in the College of Business who currently teaches three sections of a course each of which has 45 students. Class participation at the individual and team level is very important in the course. Ideally, I would have a teaching assistant to help me track classroom administrative activities. Since I have no TA, I have looked for technological solutions to help me out.


As a workaround, last year I created a rough seating chart organized by student names and teams in Powerpoint. I saved the Powerpoint file as a PDF and annotated the PDF during class using my iPad. This was an OK solution that helped me track attendance and participation, but I had to manually transfer my annotated PDF results into an Excel spreadsheet to eventually integrate with my grades. This did not solve the problem of connecting student names to faces.


In briefly speaking with other colleagues, I noticed a need for an app. Class management software is very expensive and has too many features that we don’t need. Therefore, an instructor classroom control center (or TA) app would greatly help us capture individual and team-based classroom events in real-time and reduce the time to connect student names to faces. I believe that the app needs three primary feature sets. First, the pre-class features must cover initial course setup where we can import class roster information (names and face shots) as well as a pre-class setup for modifications that occur from week to week. Second, the classroom management phase should provide graphical seating chart and individual/team event tracking capabilities with edit and read-only modes. The instructor should be able to undo and redo as one would normally expect with editing software. A text-based roster list mode would be helpful during the beginning of the semester before teams are formed and seats are assigned. Third, the post-class features must enable the class information to be saved in a screen capture and exported to a flat file format readable by popular desktop software applications such as Excel.


Summary:
- The project is to develop a graphical “teaching assistant” app that will facilitate classroom management activities for professors in small to medium-sized (e.g., 25-50 students) classrooms who need to track individual and team-based events
- The iPad is the primary target platform for this app

Prof. Philip Anderson, College of Business, philca at illinois.edu

The Department of Speech and Hearing Science offers classes in American Sign Language. These classes are very popular, and it is important that we are able to assess the need for classes at the different levels that we offer. It would help our admin staff tremendously if they could access a system that:
1. kept track of students registered in SHS222, SHS121, SHS221 and SHS321
2. ensured prerequisite compliance
3. could inform decisions about how many sections of each level of ASL to offer on a semester-to-semester basis
Such a system would ideally:
4. interface with student registration databases (this may be problematic given UIUC regulations)
5. allow students to enter information such as their plans for taking subsequent levels of the class, whether or not they require the classes for foreign language requirements, and their year of study
6. employ some constraint satisfaction tools that would minimize 'gaps' where a student could not take ASL classes in consecutive semesters


Such a system would benefit the department by cutting down on admin time and costs, and help students by allowing us to minimize disruption to language study plans and ensure that a sufficient number of sections were offered each semester. The web-based interface should be straightforward and no more difficult to navigate than Banner, and also secure enough to prevent unauthorized access to the database.

Prof. Matt Dye; Department of Speech and Hearing Science; mdye at illinois.edu; (217) 244-2546

Overview

Students need spaces to study, and their needs and preferences are diverse. They might need certain kinds of technology on hand for a group project or a quiet and closed off space for practicing demonstrations or maybe just comfy seating for late night study sessions. Students also want to know that there are snacks and coffee available – just think about the Espresso Royale locations on and near campus – always buzzing with students. Many students find their favorite study spaces serendipitously or through their friends, but there is a better way.

Data (read: open hours, noise level, technology) about all available study spaces in University of Illinois libraries and other campus buildings and their unique furnishings could be collected and put into an application, allowing students to find the most ideal study space at any time of day. With this in mind, the University Library would like to propose that a team from the Software Engineering classes develop an interactive virtual tool that draws from a database of available study spaces and integrates the data with a campus map.


Background and Inspiration

The inspiration for this project came from a tool created by the University of Washington called "SpaceScout" (see: http://spacescout.uw.edu/). The software integrates study space data with Google Maps to allow students to find spaces that meet their needs across campus.


Some of the preliminary data has already been gathered, including documentation of all Illinois library study places and photos.


We know Illinois students will benefit from this tool. Just think – eliminating some stress of finding the perfect place to study and devoting more time to the work that needs to get done.

Emma Louise Clausen; University Library; elclaus2 at illinois.edu

This project involves the development and deployment of a presentation toolbox for use on a Samsung SUR40. The SUR40 is a 40-inch display/computer combination that is touch-enabled through infrared sensors embedded within the screen; thus it is able to recognize not only fingers but also real-world objects. In addition, it can recognize 55 simultaneous touch inputs, making it an ideal collaboration environment. This project envisions a series of tools that could be used by designers to organize and review graphic work in TIFF, JPEG, or video formats for presentation and critique. These tools may include a set of guides and snaps for “pinning” work to the virtual presentation for alignment and comparison, a simple drawing tool for marking up the presentation, and, if time permits, the linking of digital content created by users to physical objects. The project will also involve working out a system for easy access to the library of work from a data repository in which students can place their work through the School of Architecture server system.


The underlying software platform used by the SUR40 is C#, with Windows Presentation Foundation and .NET used as well. The SUR40 hardware will be available for students to work with, although the SDK also allows emulation of the touch environment. Students will have the opportunity to work collaboratively with Professor Stallmeyer from the School of Architecture in developing the solution, as well as with a group of architecture graduate students who will provide feedback and testing.


This project is an opportunity to work with a group of highly engaged clients who are very savvy about digital representations, to work hands-on with some interesting technology, and to work within the burgeoning touch-enabled realm.


For an interesting demo of the SUR40 capabilities see this.

Prof. John C. Stallmeyer; School of Architecture; stallmyr at illinois.edu

The proposed attendance management program would be extremely useful as an informational and management tool to prevent international students with F-1 visas from becoming "out of status" and at risk of being deported due to attendance issues. This program could provide an important and needed service to a number of educational institutions.


You can find more information here

Janice Mouschovias, English Center USA, Director at EnglishCenterUSA.com

Ten or so years ago we obtained several grants from NASA to create a contribution to their Virtual Laboratory. Our component of the Virtual Laboratory was the Virtual Microscope, which gives interested individuals anywhere in the world the ability to download and explore large datasets of complex images and spectroscopic information, mimicking the capabilities of a number of sophisticated microscopes (such as a high-resolution scanning electron microscope [SEM] with elemental analysis and backscattered electron collection capability, among other options). Most of the details are laid out on the first page of the link provided above. As noted, this was easily a decade ago, and perhaps it is redundant to say that nearly every aspect of computing capability today has far surpassed our capabilities at the time. When we created our software and compiled the datasets that would feed Virtual Microscope, it was necessary to require our clients (the public; often science teachers and museums, and often in other countries) to download and install a separate program that would allow them to open the datasets (sometimes thousands of images, collected in long sessions, using software that was itself cumbersome to operate) and explore the samples.


Nowadays this is much simpler, of course, but we no longer have our own in-house software engineers. We did, however, run a proof of concept of this 2 years ago. It should no longer be necessary to download a program to explore our datasets, and in fact most of what we had worked so hard to put together years ago can now be done effortlessly on an iPhone, for example.


What we would like is fairly simple, but also perhaps fairly ambitious: a full redesign of the Virtual Microscope software, bringing it well into the 21st century and making it available on all common web-accessible platforms. The first phase would be designing an application that would automatically open and permit the exploration of any of the 90-plus available datasets. A second phase would be designing a collection application that would be made available to facilities with access to the microscopes and other analytical devices such as micro-computed tomography (x-ray microCT) systems, giving them the ability to upload and thus add to the datasets we make available for public use.

Scott Robinson; Beckman Institute; sjrobin at illinois.edu

The context for this project is a research program in cognitive aging. We typically administer a battery of cognitive and psychosocial (e.g., personality) assessments along with our experiments to probe adult age differences in particular cognitive mechanisms that might explain our experimental effects. Our goal for this project would be to have these assessments programmed into electronic tablets (iPads).


Many of these assessments are timed or self-paced tasks, so the program would need to output performance and/or timing data for each subject on each task. In addition to the set of tasks that would be administered a single time in the lab, there are a small number of tasks that we would like to be able to administer multiple times to subjects who will take the iPads home, so that we can measure change over time. For the single-administration tasks, we would provide specific items for all tasks in files that can be input by the program. For the repeated administrations, the program would have to generate certain events randomly (e.g., strings of letters, words from a database, shapes).
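For the randomly generated events, here is a minimal sketch of the generation step in Python; the word list stands in for the real database, and the parameters are illustrative.

    import random
    import string

    def letter_string(length=6):
        # random string of uppercase letters
        return "".join(random.choices(string.ascii_uppercase, k=length))

    def draw_words(word_db, n=5):
        # n distinct words drawn from the database
        return random.sample(word_db, n)

    word_db = ["apple", "river", "candle", "window", "garden"]  # placeholder
    print(letter_string(), draw_words(word_db, 2))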


More details here

Prof. Elizabeth A. L. Stine-Morrow; Beckman Institute; eals at illinois.edu; (217) 244-2167

The Global Studies Program would like to create an interactive world map that illustrates student and alumni job and internship history. We envision this looking like a Google map, with pins representing locations around the world where students or alumni have worked/are working. If you hover your cursor over a pin, it could show several high level items:
- alum/student name
- organization
- job title/position
- label whether this was “job” or “internship”
- time at position (i.e. “2008-2010” or “2010 - present”)
- several key words that might describe the field/industry/thematic area/subject matter


We would like this to be possible in map view or sortable list view, each searchable by any of the above categories. A beta version should initially be accessible by students, staff, and alums of Global Studies with username/password protection. For students, perhaps their NetID and password would make for easy access. For alumni, registering would create a username/password. As the Alumni Association would attest, we need this to be as accessible and compelling as possible for alums to take the time to register, so we would like to explore ease of access with your team.


Purpose:
- Students can use this map to more effectively identify and connect to other students and alumni for internship and career advice, introductions to experts, and introductions to potential employers for global internships and work.
- This new platform, if designed by Illinois CS students, would enable alumni to reconnect and engage with their former program through an attractive, engaging platform (that needless to say further impresses alumni with their alma mater’s tech capabilities).
- Alumni can also use this map to identify fellow alumni and students, with whom they can collaborate professionally. The platform will enable a vibrant ecosystem to incubate new partnerships between the university and external organizations, formally and informally. These are the seeds of a collaborative environment that would support many stakeholders – students (current and former), faculty, and administration.
- For staff this becomes a useful tool to use WITH students to facilitate building their networking skills and career navigation.
- Once brought to scale, and thanks to the CS Dept., this capability can help differentiate the Illinois Global Studies program as a thought leader among peer programs.

Mark Wehling; Global Studies; mwehling@illinois.edu; (217) 333-0178

This project consists of three parts. You can find them here, here, and here. Students may choose one, two, or three of the mobile apps for their project.

Joseph Troy; University Library; jmtroy2 at illinois.edu

We need Machine Maintenance Software so that we can have a record of the repair history on each of our machines at Campus Recreation.
- Database to be hosted on Campus Recreation servers with authenticated access by several users, some with administrative privileges.
- Tracking machine location, PMs (preventive maintenance), and history of service calls
- Forecast and schedule PMs
- Able to record notes on actions taken and parts ordered, received, and used on each service call
- Able to be indexed by Manufacturer, Model, Machine Service Number, Part Number and Location
- Parts Inventory showing dynamic, on-hand balance, location, cost usage per machine, and reorder trigger points.
- Using Microsoft Access or other suitable programs with the ability to directly print reports, and to import/export to/from Excel.
- Graphical user interface suitable for use with our current PC environment, and perhaps later also for use with tablets.

We are currently using CWorks, but have found the need to customize some aspects of it because it doesn't match our equipment maintenance needs. Perhaps a fresh approach with new software would accomplish this. I would be willing to explain what we have been using and the improvements we would like in the new software.
The Campus Recreation IT Department will be able to assist as needed, depending on their availability. Any developed system would need to be approved by the Campus Recreation IT Department prior to implementation.


Summary:
- The project is to develop Machine Maintenance Software for use in the Maintenance Department at Campus Recreation.

David P. Jones; Equipment Service, Campus Recreation; dpjohnes2 at illinois.edu

The Midwestern Regional Climate Center (MRCC) at the Illinois State Water Survey / University of Illinois provides climate data, maps, and analysis tools to a wide variety of audiences. Various climate data resources are available on the MRCC website, and offerings are continually updated and expanded based upon user and regional needs. The MRCC is interested in providing some of these resources through a mobile application environment in order to stay current and competitive with the ever-evolving technology demands of its client base. While some of the products are relatively static links to files that may or may not be updated on a real-time basis, it would be intriguing to have a mobile application developed that could be more input-driven, where the user could select the station they are interested in (perhaps from a map), select their time period of interest, and then select the product they would like to see. Much of this selection/parameter interface has been developed for our online, subscriber-based, non-mobile website called the MRCC Applied Climate System (MACS; testing subscriptions can be offered free upon request if there is interest in exploring it); however, transferring these products and/or product queries into a mobile application environment has yet to be attempted.


Summary:
- The project is to develop a mobile application based on climate products already developed by the MRCC
- In addition to any mobile application skillset / language requirement, it would also be useful to have knowledge of MySQL, perl (or similar language), javascript, and web service call concepts for accessing data from databases based upon user input parameters
- The climate science, product concept for non-mobile devices, and programs/scripts to process and deliver the product requests have already been developed in most cases, so the challenge would be transferring these products into a very user-friendly, mobile application environment

Mike Timlin, Illinois State Water Survey, mtimlin at illinois.edu, (217) 333-8506

Scientific researchers and communities regularly aggregate data from multiple disparate sources like NOAA, USGS, the Corps of Engineers, the EPA, Departments of Natural Resources, universities, etc. A typical user accesses data through the source’s web site, while more sophisticated users access the site’s web services with scripts or code written for a particular project or purpose. As the data deluge continues to grow, a common approach and service is needed to simplify access to these disparate sources. This project will focus on creating a framework and service that provides its clients a standard interface to access data and a connection service that provides the appropriate connection mechanism to the target data source. Characteristics of this service should include:
• Common client interface(s), like RESTful services, to enable access to data from browsers, scripts, and custom clients.
• An extensible data connection service to allow the addition of new data sources and protocols. Example protocols include: RESTful, http(s), ftp, sftp, telnet, SQL, SPARQL and others as identified by the community.
• The data services should be definable as pass-through requests or data-aggregation requests on a per-data-type or per-source basis.
• Description data about the source, type, location, and other metadata elements should be saved and indexed for the target data sources.
• The service needs to be compatible with the existing open source project which is built upon Java, Scala and JavaScript.
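
As a language-agnostic sketch of the extensible connection service (shown in Python for brevity; the actual work would sit on the project's Java/Scala stack, and all names here are illustrative assumptions):

```python
import urllib.request

# Sketch of the connector abstraction: one registry entry per protocol,
# so adding a new data source or protocol is a single registration.
class Connector:
    def fetch(self, source: str, query: str) -> bytes:
        raise NotImplementedError

class HttpConnector(Connector):
    def fetch(self, source, query):
        with urllib.request.urlopen(source + query) as resp:
            return resp.read()

REGISTRY = {"http": HttpConnector(), "https": HttpConnector()}

def register(protocol: str, connector: Connector):
    """Extension point: new protocols (ftp, SPARQL, ...) plug in here."""
    REGISTRY[protocol] = connector

def passthrough(protocol: str, source: str, query: str) -> bytes:
    """A pass-through request routed to the protocol's connector."""
    return REGISTRY[protocol].fetch(source, query)
```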


This senior project will become a system component of a larger open source project being developed at NCSA for the EPA and Sea Grant agencies, and will become part of the Phase II version of www.greatlakesmonitoring.org. The team’s contributions will be credited on the Great Lakes Monitoring website.

Terry McLaren; National Center for Supercomputing Applications; tmclaren at illinois.edu

The proposal can be found here.

Surangi Punyasena; Department of Plant Biology; punyasena at illinois.edu

The Office of Privacy and Information Assurance has a project in need of development resources to produce a campus-wide Security Dashboard. We are looking to create a web frontend that will act as a central point for various tools and services provided by the campus security office. The goal of the Security Dashboard is to disseminate timely and relevant security information to all faculty, staff, students, and allied agencies on the UIUC campus.


The project requires the dashboard to integrate existing homegrown and commercial technologies currently deployed in our infrastructure. Beyond integrating existing technologies, the project will require additional development of new technologies. When the project is completed, the dashboard will provide information from our incident-handling ticket system, system registration tools, endpoint management tools, vulnerability alerts, and metrics and reporting system.


The Security Dashboard will need to meet the following requirements:
• Must be written in Django or Python
• Utilize campus Shibboleth instance as authentication source
• Integrate with Active Directory for role-based authorization
• Undergo an application security scan and code review before production release
• Meet campus accessibility and usability requirements
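
A minimal sketch of how the Shibboleth and Active Directory requirements could meet in Django (the group name and the AD lookup helper are illustrative assumptions, not the office's actual configuration):

```python
from django.contrib.auth.decorators import login_required, user_passes_test
from django.http import HttpResponse

# Assumes campus Shibboleth populates REMOTE_USER and that
# django.contrib.auth.middleware.RemoteUserMiddleware is enabled,
# so Django sees an authenticated user without handling passwords.

def ad_groups_for(username):
    """Stub: in production this would query Active Directory (e.g. via LDAP)."""
    return set()

def is_security_staff(user):
    # Role-based authorization backed by an (assumed) AD group name.
    return "SecurityDashboard-Staff" in ad_groups_for(user.username)

@login_required
@user_passes_test(is_security_staff)
def incident_tickets(request):
    # Placeholder view: would render data from the incident ticket system.
    return HttpResponse("ticket summary for authorized staff")
```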


The Security Dashboard is a large project with many components; it can therefore be broken into phases as needed in order to complete it in a single semester. Which phase should be worked on first can be discussed during the requirements-gathering meeting if the project is selected. If you are looking for a challenging project that involves working with a diverse set of technologies and stakeholders, and one that offers an experience similar to the code-development demands of a large enterprise environment, then this is the project for you. Additionally, if the project is a success and stakeholders are satisfied, there is potential for employment opportunities both while a student and after graduation.

Joe Barnes; Office of Privacy and Information Assurance; jdbarns1 at illinois.edu

Summary
We want to build a platform to run various social games as experiments in the coordination of swarms of robots. At least one game of the following type should be built as well. Each player will move a robot within a field of play and collaborate or compete with other players to achieve a task, such as:

  • Form a specified shape, like the U of I logo
  • Find a stationary or moving target
  • Evade all other players
  • Play capture-the-flag (http://en.wikipedia.org/wiki/Capture_the_flag)
Basic Project Description

The purpose of this platform is to use crowdsourcing to learn how to design and program swarms of autonomous robots. The wishlist of features is:
1. Realistic and settable configuration of robots. The experimenter can set the robots’ maximum speeds, fields-of-view, communication ranges, battery life, etc., or leave them as default. These properties may interact: moving faster may drain the battery quicker, or a robot can increase its field-of-view by communicating with nearby robots. These interactions should also be settable.
2. Real-time analytics. The experimenter can specify what kinds of feedback players receive in real-time. For example, a metric of how close the whole team is to achieving its task, or just the contribution of the robots-in-view to that metric.
3. Data logging. All data from each experiment should be logged in a database for later analysis.
4. Extensible. There should be a way to add more robot properties, more types of player feedback, and more games on top of the platform.
5. Automatic experimenter. The experiments can be generated and run online without human supervision.

Note that the graphics can be as simple as dots in a rectangular playing field, but the more fun the games the better. Students may consider writing this platform for Android.
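
For instance, the settable, interacting robot properties in item 1 of the wishlist could start as a small configuration object. This is only a sketch; the property names, defaults, and the drain formula are our illustration, not the experimenters' specification.

```python
from dataclasses import dataclass

# Sketch of wishlist item 1: settable robot properties whose
# interactions are themselves settable (all values illustrative).
@dataclass
class RobotConfig:
    max_speed: float = 1.0       # m/s
    field_of_view: float = 90.0  # degrees
    comm_range: float = 5.0      # metres
    battery_life: float = 600.0  # seconds at idle

    def drain_rate(self, speed: float) -> float:
        """Example interaction: moving faster drains the battery quicker."""
        return 1.0 + 0.5 * (speed / self.max_speed) ** 2
```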

Contact: Prof. Grace Xingxin Gao, Dept. of Aerospace Engineering, gracegao@illinois.edu

We develop VMD, a sophisticated molecular visualization and analysis tool used by researchers around the world. VMD incorporates scripts and plugins that give users the ability to create complex interactive demos, movies, and still image snapshots. VMD can also export molecular graphics to various 3-D scene file formats for use by software that does photorealistic rendering (e.g. POV-Ray, Renderman, Maya, etc), for 3-D model printing, and for embedding models into PDFs, HTML5, and other document types.

We would like the software engineering class to combine several of the VMD features above to produce a new VMD plugin that can automatically export existing interactive VMD demos for embedding into web pages, PowerPoint presentations, PDFs, or other document types, using available technologies (e.g. HTML5, X3D / X3DOM, 3-D PDFs) and existing software tools, plugins, or libraries. An additional goal of this effort is to try to support viewing on a range of mobile devices (phones and tablets) and platforms (iOS, Android, Windows 8 RT).

This effort would link VMD's current ability to export 3-D scenes to web browsers with the demomaster plugin, allowing a complete demo to be exported to a web page, complete with buttons to push, enabling the audience to step through the demo. The web page format should work on desktops/laptops, but also be specialized for phones and tablets (iOS and Android) so that the audience of a demo could use their personal devices to review the demo material. VMD itself could act as a temporary webserver to enable impromptu sharing of the demo views and provide the audience with information on which button corresponds to the current state of the live demo, making it easy to jump to the current view for a closer look on the device. This has obvious applications for VMD-based instruction in a workshop or classroom setting, as well as meetings with collaborators, or even to provide supplementary material for a poster.

VMD Home Page: http://www.ks.uiuc.edu/Research/vmd/

Contact: Prof. John Stone, NIH Center for Macromolecular Modeling and Bioinformatics, johns@ks.uiuc.edu

Overview

Our group develops web-based workbenches for taxonomists, i.e., scientists who describe the world's biodiversity (think intrepid explorers collecting shiny insects, studying them under microscopes back at the lab, and describing species previously unknown to science). We are in the early stages of planning our next-generation workbench. Your team would develop a fully functioning "generic science application" that would act as a foundation for the long-term development of our software. Your contribution would ultimately be used by a growing number of national and international scientists (currently over 20 labs) and would significantly influence our long-term development.

Details

Your team will develop a "generic science workbench", a web application extensible by us, future students, and other open-source contributors. At its core it will allow for the creation of multiple "projects", each with multiple users. To this core we'll add data "modules". You will add at least one data module based on your interests and our needs. Depending on your capabilities and interest, additional ones may be added during the course. Data are wide-ranging and include mapping biological specimens, mobile-accessible field notebooks, integrated reference management, and handling media libraries. We are, again, not asking for the full scope of needs to be built, but rather a single example that can be worked out between your team and us. The workbench will be deployable via a mechanism simple enough to be successfully used by technically competent but non-programming scientists (think Ruby gem).

Tech

We want you to use the latest and greatest (e.g. alpha pre-releases are completely acceptable) software for Ruby/Rails stacks. This should include at minimum Rails 3 (Rails 4 welcome), using a Postgres or MySQL backend, to be deployed on Linux (Ubuntu, RHEL) and/or OS X servers. We welcome the use of HTML5 and cutting-edge JavaScript libraries in the stack, and would like jQuery to be used as the general-purpose JS library. You'll provide unit tests for all of the code you write. You'll use Git repositories for source code. We'll point you at previously built Rails apps that you can draw data modules from if desired.

Interfaces

We don't expect you to provide even close to everything we're thinking about, but what is provided should stand alone. In addition to HTML interfaces we need to plan for our data to ultimately be available as JSON, XML, and Linked Data, via the Rails console, and via mobile devices. The Rails stack provides much of this for "free", but there is also opportunity to provide advancements in these areas. To ensure future flexibility we want to emphasize super-simple interfaces (e.g. no integrated images for icons or links) that are customizable by CSS rules or additional code. There are lots of opportunities for innovation here. For example, we want our next-generation workbench to be "smart", i.e. it should contain components that adapt to a user's or group's previous actions. If we recognize that data are entered in a particular order, we might record this order and allow users to set up macro-like paths that can be triggered on subsequent usage.
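
That "macro-like paths" idea reduces to recording which action tends to follow which and suggesting the most likely next step. A minimal sketch (shown in Python purely for illustration; the actual workbench would be Rails, and the function names are ours):

```python
from collections import Counter, defaultdict

# Sketch of the adaptive "macro-like paths" idea: count observed
# action-to-action transitions, then suggest the most common successor.
follows = defaultdict(Counter)

def record(previous_action: str, action: str):
    follows[previous_action][action] += 1

def suggest_next(action: str):
    counts = follows[action]
    return counts.most_common(1)[0][0] if counts else None

# e.g. after many record("new_specimen", "add_locality") calls,
# suggest_next("new_specimen") -> "add_locality"
```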

Summary

This project represents a great opportunity for you to focus on building the components your team is most interested in, within a fairly loose framework that we'll detail. Our team will have ample time to work with you throughout your project, particularly during the requirements-gathering stage. We expect that Rails will allow you to quickly meet the basic requirements, at which point you'll be able to focus on truly innovative contributions to the application.

Contact: Dr. Matt Yoder, Biological Informatician, INHS. 'diapriid' on skype, 'diapriid@gmail.com', 217-244-9473

Have you ever wondered why the fire alarm system in your dorm, classroom building or any building on Campus went off and made you evacuate? Well, here is your chance to help develop a software program that will allow us to share that information with the Campus population.

Objective

The Office of Campus Code Compliance and Fire Safety (CCC&FS) is seeking the development of a web-based email program that will mate with an existing software program to share emergency response data through a mass email dump.

Background Info

Our Unit works hand in hand with both the Champaign Fire Department and Urbana Fire Department to comprehensively achieve a safe environment on Campus.

Every time the Fire Departments are called to respond to an emergency situation on Campus, they have Federal responsibilities to generate an official report for that incident. This incident report documents what time they were called out, why they were called out, what actions they took on the scene, what personnel were on the scene, and what time their personnel left the scene. Both Fire Departments share these reports with CCC&FS. The information in these reports is very valuable to our Campus. It is especially important to share this information with Facility Managers and Department Heads in the specific buildings where the emergency response took place, giving them a full understanding of what went on in the buildings they are responsible for. Once these people are informed, they can email and inform the rest of the building about what exactly happened.

The Fire Departments share these incident reports through software called Firehouse. The Firehouse program stores all the emergency response incident reports for Campus and categorizes them according to which building they were for.

The Firehouse program is the software that this newly created program would have to interface with. The new program would need to import the Incident Report from Firehouse and attach it to the specific building's profile for where the emergency was located. Once all of the Incident Reports are imported and attached to the building profiles (which would include the building contacts and their email addresses), there would need to be a function to send out the emails with the reports attached.

Summary

We are seeking the development of a web-based email program that will mate with an existing software program to share emergency response data to campus contacts through a mass email dump.

This program will need to house a database of all our Campus buildings and their associated contact info.

This program will need to interface with our existing Firehouse software to extract the Incident Reports and then attach them to the appropriate building profile.

This program will then need to generate emails for the buildings that have recently had emergency calls, attach the corresponding incident reports, and send them out.
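
A minimal sketch of that final step, mailing one report to a building's contacts, using Python's standard email modules (the SMTP host, addresses, and the assumption that the Firehouse export is a PDF file are all illustrative, not part of the actual specification):

```python
import smtplib
from email.message import EmailMessage

# Sketch: mail one incident report to a building's contacts
# (SMTP host, addresses, and the PDF path are illustrative assumptions).
def send_incident_report(building, contacts, report_path):
    msg = EmailMessage()
    msg["Subject"] = f"Fire/emergency incident report: {building}"
    msg["From"] = "cccfs@illinois.edu"
    msg["To"] = ", ".join(contacts)
    msg.set_content(f"Attached is the latest incident report for {building}.")
    with open(report_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="pdf", filename="incident_report.pdf")
    with smtplib.SMTP("localhost") as server:
        server.send_message(msg)
```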

Contact: Prof. Ryan Wild, rwild@illinois.edu

We are seeking to develop a portfolio repository for students and faculty in the visual arts fields that will serve to preserve contributed artworks, showcase artworks to parties external to the U of I (prospective employers, admissions counselors, galleries, etc), and allow participants to interact with each other through commenting and online exhibitions.

We would like the front end of the software to be visually appealing, with a clean and streamlined feel and large graphics, plus the ability to search by keyword or artist name and to sort artworks by fields such as status (faculty, undergraduate, graduate), major, medium, and upload date.

The back end of the software would need to be as user-friendly as possible in order to accommodate various levels of computer literacy. Ideally, students and faculty would be able to log in with their netid/password, upload their media files (file formats supporting image, video, CAD, Adobe Illustrator and InDesign, and PDF formats), enter cataloging information for their works (each individual work as well as an overall statement of interest for each artist), and “publish” their works to be seen publicly on the software front end.

Other requirements include tie-ins with social media sites such as LinkedIn and Vimeo, commenting and private-message functionality, and the ability to create online exhibitions. It would be ideal if the comments and messages could only be seen by users who are logged in, and not by the public.

There are no specific technology requirements, but we would like to eventually be able to collaborate with the University of Illinois Chicago to make this a shared system and enable further collaboration.

Contact: Sarah Christensen, schrstn@illinois.edu

In congested traffic, drivers respond only to the motions of the cars immediately ahead. They often find it extremely difficult to maintain a steady speed. Rather, the slightest deceleration of one vehicle may force many following vehicles to slow down abruptly, and drivers often find themselves engaged in frequent acceleration-deceleration cycles, a phenomenon commonly referred to as “stop-and-go traffic” or “traffic oscillations.” Traffic oscillation causes significant safety hazards, extra travel delay (or even gridlock), extra fuel consumption, increased air pollution, and excessive driving discomfort.

We have proposed a mathematical framework to design effective oscillation mitigating strategies based on information provision and sharing across vehicles. We want to validate the effectiveness of the proposed information sharing strategies through driving experiments with human players in a simulated environment. Hence, we will develop a web-based “driving-in-queue” simulation environment, where the controller (i.e., who sets up and runs the game) controls the motion of the (slow-moving) leading vehicle and human players assume the roles of drivers in the following (queued) vehicles in a platoon in multiple lanes.

From a technical perspective, the software is a multi-threaded server-client program. The user interface for each player will show the rear of the preceding vehicle and the road in 3-D animations that reflect actual changes in spacing and speed. The user can choose to accelerate, decelerate, or change lanes based on the surrounding traffic conditions. The driving game differs from other simulation software (or video games such as “Need for Speed”) by the introduction of communication channels, through which traffic conditions (such as the leading vehicle’s location and speed) can be optionally provided to the players. The software tracks all control records and vehicle movements in real time and saves them for future analysis.
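
For concreteness, the core of the simulation loop is a per-vehicle car-following update of roughly this shape. This is only a sketch with illustrative gains and linear dynamics; it is not the proposed mathematical framework.

```python
# One time step for a queued vehicle reacting to its predecessor
# (gap in metres, speeds in m/s; gains and desired gap are assumptions).
def update_follower(gap, speed, leader_speed, dt=0.1,
                    desired_gap=20.0, k_gap=0.1, k_speed=0.5):
    accel = k_gap * (gap - desired_gap) + k_speed * (leader_speed - speed)
    new_speed = max(0.0, speed + accel * dt)   # no driving in reverse
    new_gap = gap + (leader_speed - new_speed) * dt
    return new_gap, new_speed
```

Sharing the leader's speed through the communication channel (versus hiding it) changes what each player can react to, which is exactly the effect the experiments are designed to measure.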

Contact: Prof. Yanfeng Ouyang yfouyang at illinois.edu

We are developing a low-cost solar photovoltaic platform for the developing world. Rather than instrumenting each solar installation with keypads and LCD displays to provide user interaction, we are leveraging the widespread adoption of cellular phones in these markets, virtually all of which have integrated bluetooth. As a proof of concept, we need to develop an Android app capable of communicating with our solar PV installation over bluetooth. The app will display things such as real-time and historic battery state of charge, and visual information about average power consumption. Additionally, the app will let the user set desired charge and discharge limits, as well as timing operation of when the battery should provide power (this is important for running things like pumps for irrigation).

The app must be coded with multi-language support from the get-go. Initially, we only need an English version, but it should be trivial to extend to other languages. Furthermore, as much of the information as possible should be displayed in graphical form, with more pictures and fewer words.

Technically, the app will support the bluetooth Serial Port Profile (SPP) and will communicate with a hardware bluetooth chip/microcontroller that is embedded in the solar PV system. A suitable API/protocol for sending the requisite information should be implemented on top of the bluetooth stack. We can provide bluetooth embedded hardware and a testing platform such as a Nexus phone/tablet or an equivalent Android prototype platform.
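
As a sketch of what that API/protocol over the SPP serial link might look like, the sketch below frames a request as header, message type, length, payload, and checksum. The frame layout, header byte, and message types are assumptions for illustration, not the group's actual protocol; Python is used here for brevity even though the app itself would be Java/Android.

```python
import struct

HEADER = 0xA5  # assumed frame-start byte

def encode_request(msg_type: int, payload: bytes = b"") -> bytes:
    """Frame = header, type, payload length, payload, 8-bit checksum."""
    frame = struct.pack("!BBB", HEADER, msg_type, len(payload)) + payload
    checksum = sum(frame) & 0xFF
    return frame + bytes([checksum])

# e.g. msg_type 0x01 = "read battery state of charge" (hypothetical):
# encode_request(0x01) -> b'\xa5\x01\x00\xa6'
```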

In summary:
- The project involves developing a multi-lingual Android app for a low-cost solar PV platform for the developing world.
- Familiarity with Android/Java is preferred, as is some knowledge of or interest in bluetooth communication.

Contact: Prof. Robert Pilawa-Podgurski pilawa@illinois.edu

SoyFACE at the University of Illinois at Urbana-Champaign is the largest and the oldest experiment of its kind currently operating in the world. FACE (Free Air Concentration Enrichment) is a powerful technique for investigating the interaction of climate and ecosystem. Using this technique, we elevate the concentration of CO2 and O3 in an open field via an array of sensors and controls which continually adjust the release of gas to achieve a precise concentration. In addition, we use infrared heating and rainfall interception to alter the canopy temperature and soil water availability. At SoyFACE, we conduct cutting-edge research in plant genetics, physiology, ecology, and climatology, with an eye on the challenges of feeding a growing population in a changing climate.

Our experiment is in need of talented and motivated students to take our data stream to the next level. Our large, interdisciplinary team needs a way to easily access, visualize and manipulate real-time data from our environmental sensors and experimental equipment, both on-site and remotely. The data include atmospheric gas concentrations, air and soil temperature, precipitation, wind, humidity, and parameters of our experimental equipment. We need a semi-automated quality assurance system before the data enters our database, and we need tools to help our researchers access and visualize historical data via our web site.
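
The semi-automated quality assurance gate could start as simple range checks that route suspect readings to a human before database entry. A minimal sketch only; the variable names and limits below are illustrative assumptions, not SoyFACE's actual thresholds.

```python
# Sketch of a QA gate for incoming sensor readings
# (variable names and limits are illustrative assumptions).
LIMITS = {
    "co2_ppm": (300.0, 1200.0),
    "air_temp_c": (-40.0, 55.0),
    "soil_temp_c": (-20.0, 45.0),
}

def qa_flag(variable: str, value: float) -> str:
    lo, hi = LIMITS[variable]
    if lo <= value <= hi:
        return "pass"     # enters the database directly
    return "review"       # queued for a human check (the "semi" part)
```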

This project requires the interaction of a variety of technologies including Windows, Android, iOS, Windows Mobile, SQL, 802.11 networks, and web development. Student programmers would also have the opportunity to collaborate with staff programmers at the Institute for Genomic Biology, who are designing software for the embedded computers that run the experiment.

Contact: Kannan Puthuval, kputhuva@illinois.edu

More than 34% of Americans are obese, and it is becoming increasingly evident that limiting calories alone is insufficient for sustainable management of a healthy weight. Incorporating sufficient amounts of fiber and protein in the diet decreases hunger, increases weight loss, and improves body composition. Thus, teaching individuals not only to keep calories in a healthy range but also to select foods high in protein and fiber will aid sustainable management of a healthy weight. We have developed a visual nutrition education tool that assists adequate intake of fiber and protein by two-dimensionally plotting the fiber and protein content of foods together with a target for weight management. The system has the potential to be used by consumers to facilitate grocery shopping, by health practitioners to aid clients with personalized weight-loss counseling, or by researchers to track dietary compliance in intervention studies. However, the system is currently limited by a lack of software. Development of new software that makes our education tool simple to use for our various target end users is critical for the advancement of this innovative research.

We would like to have browser-based software developed that can be used by health professionals and by their clients. The software must be able to retrieve nutrition information for foods from the USDA SuperTracker database (https://www.supertracker.usda.gov/foodapedia.aspx) and then plot food(s) onto our two-dimensional plot, based on fiber and protein content per calorie, to visualize foods for the user. The food name and calorie content should appear in text next to each food’s data point on the plot. Users should then be able to use the plot to compare similar foods, or to add foods together to form a meal using foods from the database mentioned above. Once a meal is plotted, the software should be able to suggest substitutions of foods similar to those included in the meal that can make the meal more beneficial for weight management by bringing the meal closer to the predefined target. We would also like the software to allow plotting of foods via manual entry of nutrition information. Allowing users to save favorite meals, track their diet over time, and submit their diet history to a research database are additional features we would like included. The system is based on relatively simple math but requires a very user-friendly UI to ensure it is simple to use by general consumers. For example, it should be relatively quick for a dietitian to use the software in a counseling session to plot the foods a client has consumed in a day and show which foods are problematic and what substitutions could be made to improve those problematic foods.
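
The underlying math really is simple: each food, and each meal as a whole, is positioned by its fiber and protein content per calorie. A minimal sketch (the nutrient field names are our illustration, not the USDA schema, and calories are assumed nonzero):

```python
def plot_coords(fiber_g: float, protein_g: float, kcal: float):
    """Return a food's (x, y) position on the two-dimensional plot."""
    return fiber_g / kcal, protein_g / kcal

def meal_coords(foods):
    """A meal combines foods: nutrient totals are plotted per total calorie."""
    fiber = sum(f["fiber_g"] for f in foods)
    protein = sum(f["protein_g"] for f in foods)
    kcal = sum(f["kcal"] for f in foods)
    return fiber / kcal, protein / kcal
```

Suggesting a substitution then amounts to searching the database for a similar food whose coordinates move the meal closer to the predefined target.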

Contact: Prof. Manabu T. Nakamura, mtnakamu@illinois.edu

We have a project that requires an Audio Puzzle Game: a puzzle game that presents a player with segments of overlapping audio waveforms and asks the player to re-arrange and align these segments by shifting them, in a manner roughly similar to a jigsaw puzzle. The input is a page or several paragraphs of audio recordings chopped into overlapping segments. The audio segments are then scrambled like jigsaw puzzle pieces. The re-arrangement and alignment are assisted by a transcription of each audio segment. This transcription step is required.


The transcription is performed by each player by clicking on each segment to listen to the corresponding audio and then entering his or her transcription in text. The text is displayed beneath each audio segment that has been transcribed. An algorithm will ensure the correctness of the arrangement (much like the way jigsaw puzzle pieces snap together). To ensure the correctness of the transcription, each player (and each team) can vote on other players’ (and competing teams’) transcriptions.
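
The "snap together" test can be as simple as comparing the overlapping regions of two candidate neighbors. A minimal sketch, with segments shown as lists of samples; the overlap length and tolerance are assumptions, not part of the proposal.

```python
# Two segments "snap" when the tail of one matches the head of the next
# within a tolerance (real audio would be arrays of samples).
def snaps_together(seg_a, seg_b, overlap: int, tol: float = 1e-3) -> bool:
    tail, head = seg_a[-overlap:], seg_b[:overlap]
    return all(abs(x - y) <= tol for x, y in zip(tail, head))
```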


The output of this game is a “human computation” transcription of the audio recording.


A multiplayer mode is desired in which people can compete against each other from their mobile devices. In an individual competition mode, each player will be presented with the same puzzle, and whoever solves it first wins that round of Audio Puzzle. In a team competition mode, each player on a team will be presented with different audio segments that together constitute the whole audio recording; the task is for the team to snap all these segments together collaboratively (exchanging transcriptions of already-transcribed segments to speed this up) and provide the transcription before any other team does. During play, an optional chat window will allow a player to collaborate with other players (this chat window is optional; if not implemented, the Audio Puzzle Game should still run fine). The audio waveform can be color-coded (e.g., by frequencies) to aid rearrangement. Transcribed text below combined segments can be color-coded to show the degree of transcription agreement.


For starters, an audio track will be chopped into overlapping segments manually using Audacity, and their waveform pictures captured and chopped manually using screen capture. If time permits, a simple pseudorandom chopper routine for both the audio waveform and its picture could be implemented, but this is not a hard requirement. Also, ripping audio tracks from YouTube and song tracks is possible if time permits. This is optional and not intended to add to the complexity of this software engineering task.


This Audio Puzzle Game can be implemented either as an iOS app, an Android app, or a simple desktop program.


More information can be found in this document

Contact: Alex Yahja; alexy at illinois.edu

We are working on a project that needs a special kind of social networking: an object-sharing social network where participants can exchange and operate with “useful objects” rather than just texts, photos, emails, and videos. An “object” here is defined as a virtual, physically realistic (this is an ideal; a rough model is sufficient for this work) model of a real-life object such as shoes, toys, lamps, vehicles, clothes, persons, food, flowers, pets, drinks, flashlights, holistic travel plans, medicine, etc. All these objects have their physical properties, pictures, 3D models (when possible and if time permits), and behavior defined. The application of this social networking is virtual in-depth discussion of things related to e-commerce and consumer care. Currently, if you buy things from Amazon.com you can only read the textual description and reviews of the product. Ideally, if you buy a wireless speaker, for example, you should be able to try out its virtual model to see how it would perform. To limit the number and complexity of objects, we will only require 3D geometric shapes such as cubes, blocks, cylinders, and spheres. If time permits, we could use PhysX http://www.geforce.com/hardware/technology/physx and APEX https://developer.nvidia.com/apex for the creation of 3D models, but this is not a requirement.

The social network itself is object-oriented, as people will be modeled as virtual persons with components representing (rough approximation is fine) physical body, face, clothing, and intellectual capital (e.g., publications, research interests, and educational background). These components are to be as simple as possible; if one is too complex, simplify it or simply ignore it. The relationships between people will also be modeled as “relationship objects” where, for example, a “friendship relationship” is derived from past activities and the sharing of similar objects (including abstract objects or ideas) and behaviors (an ad-hoc, simple heuristic to compute the “friendship relationship” is sufficient). Other relationships might include “seller-consumer”, “teacher-student”, “mechanic-carOwner”, “lawyer-client”, “advisor-advisee”, “fashionista-fans”, “doctor-patient”, etc. This list is just an example; you are free to define any relationships (a simple routine suffices). The social network will have dynamic mechanisms for sending and sharing objects, with a GUI/virtual/game environment for the objects to display their behaviors. For starters, this GUI environment can be the PhysX Visual Debugger https://developer.nvidia.com/physx-visual-debugger. If time permits and it is not too complicated, the Unreal Game Engine http://www.unrealengine.com/en/udk/licensing/ can be used, but this is not a requirement.
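
The ad-hoc friendship heuristic could be as plain as set overlap between the objects two people have shared. A minimal sketch; the similarity measure and threshold are arbitrary assumptions for illustration.

```python
# Friendship as Jaccard similarity of shared-object sets
# (measure and threshold are illustrative assumptions).
def friendship_score(objects_a: set, objects_b: set) -> float:
    if not objects_a and not objects_b:
        return 0.0
    return len(objects_a & objects_b) / len(objects_a | objects_b)

def are_friends(objects_a: set, objects_b: set, threshold: float = 0.3) -> bool:
    return friendship_score(objects_a, objects_b) >= threshold
```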

Contact: Alex Yahja; alexy at illinois.edu

The Windows-based software package MAMBO (Modeling and Analysis of MultiBOdy systems), available for free download at http://danko.mechanical.illinois.edu/Mambo, was developed between 1998 and 2003 at KTH Royal Institute of Technology in Stockholm, Sweden, and at Virginia Polytechnic Institute and State University, with the purpose of allowing users to model and analyze the motion of multibody mechanical systems. Its use is documented in the 2004 Springer textbook "Multibody Mechanics and Visualization" by Harry Dankowicz, and it has constituted an essential component of courses in mechanics offered at the above-mentioned institutions as well as at the University of Illinois. The package and the textbook will be used again in Spring 2013 in ME 440 "Kinematics and Dynamics of Mechanical Systems." The application is written in C++ and makes extensive use of Microsoft Foundation Classes and OpenGL. The companion package "The MAMBO Toolbox" is available for the Maple, Mathematica, and MuPAD/MATLAB computer-algebra platforms, and provides a suite of functionality for developing simulation and animation projects from scratch.


The proposed project may take one of several paths. There is great value in porting the MAMBO application to alternative operating systems, including MacOS, iOS, and/or Android. There is also significant room for improvement in the existing Windows-based implementation, not least of which is updating the application to take advantage of changes to the underlying operating system since Windows XP.


Project Customer:
Prof. Harry Dankowicz of the Department of Mechanical Science and Engineering led the development effort of the original software package and would act as the customer for this project. The target audience, however, is undergraduate and graduate engineering students. In particular, there is an opportunity to deploy alpha and beta versions of the package as early as the Spring 2013 iteration of ME 440 (25 registered students).

Prof. Harry Dankowicz; danko at illinois.edu; (217) 244-1231