Model-Oriented Programming – Umple.org

Umple is a modeling tool and programming language family to enable what we call Model-Oriented Programming. It adds abstractions such as Associations, Attributes and State Machines derived from UML to object-oriented programming languages such as Java, PHP and Ruby. Umple can also be used to create UML class diagrams textually.

Umple is an open source project, so details will evolve. However, it is ready to be used for real systems; in fact, the Umple compiler itself was written in Umple. Any Java or PHP project could use Umple. We have found the resulting code to be more readable and to have many fewer lines, because Umple lets you avoid writing much of the ‘boilerplate’ code that would otherwise be needed to implement associations and attributes. A system based on Umple should also be less bug-prone.

Umple has also been found to help students learn UML faster in the classroom.

 

Resources

Want to try Umple out? Click on UmpleOnline to experiment with the language and generate either Java or PHP code (works directly in the browser).

To explore Umple, browse the user manual

New to Umple? Read the tutorial powerpoint presentation on Umple to obtain an overview.

Want to contribute or learn Umple in more depth? Go to the Google Code site where we are maintaining Umple as an open source project.

The Google code site hosts our Wiki, which has examples, tutorials and documentation.

Of key interest in the Wiki is a list of presentations and other tutorials about Umple

If you are a researcher or want to learn about Umple at a deeper level, peer-reviewed papers and theses can be found in our list of Umple publications.

Prof Lethbridge regularly blogs about Umple

There is a Google group (mailing list) you can join to be notified about Umple news.

The trunk of the Umple code tree is here.

Umple uses CruiseControl for automatic building. Here is the current build status.

Umple development uses test driven development. Here is the report of testing of the latest build.

Umple’s current bug tracking system is here.

from: http://cruise.eecs.uottawa.ca/umple/

 

Performance test, stress test – open source

Allmon

Description:

The main goal of the project is to create a distributed, generic system for collecting and storing various runtime metrics, used for continuous monitoring of system performance, health, quality and availability. Allmon agents are designed to harvest a range of metric values from many areas of the monitored infrastructure (application instrumentation, JMX, HTTP health checks, SNMP). The collected data form the basis for quantitative and qualitative performance and availability analysis. Allmon also collaborates with other analytical tools for OLAP analysis and data-mining processing.

Requirement:

Platform independent

 

Apache JMeter

Description:

Apache JMeter is a 100% pure Java desktop application designed to load test functional behavior and measure performance. It was originally designed for testing Web Applications but has since expanded to other test functions. Apache JMeter may be used to test performance both on static and dynamic resources (files, Servlets, Perl scripts, Java Objects, Data Bases and Queries, FTP Servers and more). It can be used to simulate a heavy load on a server, network or object to test its strength or to analyze overall performance under different load types. You can use it to make a graphical analysis of performance or to test your server/script/object behavior under heavy concurrent load.

Requirement:

Solaris, Linux, Windows (98, NT, 2000). JDK1.4 (or higher).
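
For unattended or larger runs, JMeter is usually driven from the command line in non-GUI mode. A minimal sketch (plan.jmx and results.jtl are placeholders for your own test plan and results log):

> jmeter -n -t plan.jmx -l results.jtl

Here -n selects non-GUI mode, -t names the test plan to run and -l the file where sample results are written.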

 

benerator

Description:

benerator is a framework for creating realistic and valid high-volume test data, used for (unit/integration/load) testing and showcase setup. Metadata constraints are imported from systems and/or configuration files. Data can be imported from and exported to files and systems, anonymized, or generated from scratch. Domain packages provide reusable generators for creating domain-specific data such as names and addresses, internationalizable by language and region. It is highly customizable through plugins and configuration options.

Requirement:

Platform Independent

 

CLIF is a Load Injection Framework

Description:

CLIF is a modular and flexible distributed load-testing platform. It can address any target system that is reachable from a Java program (HTTP, DNS, TCP/IP…). CLIF provides three user interfaces (Swing GUI, Eclipse GUI, command line) to deploy, control and monitor a set of distributed load injectors and resource consumption probes (CPU, memory…). An Eclipse wizard helps with programming support for new protocols. Load scenarios are defined through XML editing, a GUI, or a capture tool. The scenario execution engine allows the execution of up to millions of virtual users per load injector.

Requirement:

Java 1.5 or greater, with enhanced support for Linux, Windows XP, MacOSX/PPC

 

ContiPerf

Description:

ContiPerf is a lightweight testing utility that lets the user easily leverage JUnit 4 test cases as performance tests, e.g. for continuous performance testing. It is inspired by JUnit 4's easy test configuration with annotations and by JUnitPerf's idea of wrapping unit tests for performance testing, but is more powerful and easier to use.

Requirement:

Windows, Mac OSX, Linux, Solaris and all other platforms that support Java 5

 

curl-loader

Description:

A web application testing and load-generating tool written in C. The goal of the project is to provide a powerful open-source alternative to Spirent Avalanche and IXIA IxLoad. The loader uses real HTTP, FTP and TLS/SSL protocol stacks, simulating tens of thousands of users/clients, each with its own IP address. The tool supports user authentication, login and a range of statistics.

Requirement:

Linux

 

D-ITG

Description:

D-ITG (Distributed Internet Traffic Generator) is a platform capable of producing traffic at the packet level, accurately replicating appropriate stochastic processes for both the IDT (Inter-Departure Time) and PS (Packet Size) random variables.

Requirement:

Linux, Windows

 

Database Opensource Test Suite

Description:

The Database Opensource Test Suite (DOTS) is a set of test cases designed for the purpose of stress-testing database server systems in order to measure database server performance and reliability.

Requirement:

Linux, POSIX

 

DBMonster

Description:

DBMonster is an application to generate random data for testing SQL database driven applications under heavy load.

Requirement:

OS Independent

 

Deluge

Description:

An open-source web site stress test tool. Simulates multiple user types and counts. Includes proxy server for recording playback scripts, and log evaluator for generating result statistics. Note: this tool is no longer under active development although it is still available on Sourceforge. BEWARE: This tool has not been updated since 2002. It remains listed here in case anybody wishes to take it over.

Requirement:

OS independent

 

Dieseltest

Description:

Dieseltest is a Windows application that simulates hundreds or thousands of users hitting a website. BEWARE: This tool has not been updated since 2001. It remains listed here in case anybody wishes to take it over.

Requirement:

Windows

 

Faban

Description:

Faban is a facility for developing and running benchmarks, developed by Sun. It has two major components, the Faban harness and the Faban driver framework. The Faban harness automates the running of server benchmarks and acts as a container for hosting them, allowing new benchmarks to be deployed rapidly. Faban provides a web interface to launch and queue runs, and extensive functionality to view, compare and graph run outputs.

Requirement:

OS independent; JVM 1.5 or later.

 

FunkLoad

Description:

FunkLoad is a functional and load web tester written in Python. Its main use cases are functional and regression testing of web projects, performance testing by loading the web application and monitoring your servers, load testing to expose bugs that do not surface in cursory testing, stress testing to overwhelm the web application's resources and test its recoverability, and writing web agents that script repetitive web tasks, such as checking whether a site is alive. An example session is sketched below.

Requirement:

OS independent – except for the monitoring which is Linux specific.
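
As a rough sketch of a FunkLoad session (assuming FunkLoad is installed and you have written a test module such as test_Simple.py containing a Simple test case; the file, class and result names here are illustrative, not definitive):

> fl-run-test test_Simple.py
> fl-run-bench -c 1:10:20 test_Simple.py Simple.test_simple
> fl-build-report --html simple-bench.xml

fl-run-test runs the test case functionally, fl-run-bench replays it as a benchmark with increasing concurrency cycles (-c), and fl-build-report turns the recorded XML results into an HTML report; check the FunkLoad documentation for the exact result file name your run produces.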

 

FWPTT load testing web applications

Description:

fwptt is an open source program for load testing web applications. It can record normal and AJAX requests. It has been tested on ASP.NET applications, but it should also work with JSP, PHP and others.

Requirement:

Windows

 

Grinder

Description:

The Grinder is a Java load-testing framework making it easy to orchestrate the activities of a test script in many processes across many machines, using a graphical console application.

Requirement:

OS Independent
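
The Grinder itself is launched from the command line; a minimal sketch (assuming grinder.jar from the distribution sits in lib/ and a grinder.properties file points at your test script; paths are placeholders) might look like:

# start the graphical console on one machine
> java -classpath lib/grinder.jar net.grinder.Console
# start an agent process on each load-generating machine
> java -classpath lib/grinder.jar net.grinder.Grinder grinder.properties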

 

GrinderStone

Description:

GrinderStone is an Eclipse plug-in for the development of Grinder load-testing scripts, including debugging, modularity and pretty logging.

Requirement:

All

 

Hammerhead 2 – Web Testing Tool

Description:

Hammerhead 2 is a stress-testing tool designed to test your web server and web site. It can initiate multiple connections from IP aliases and simulate numerous (256+) users at any given time. The rate at which Hammerhead 2 attempts to pound your site is fully configurable, and there are numerous other options for trying to create problems with a web site (so you can fix them).

Requirement:

Hammerhead has been used with Linux, Solaris and FreeBSD.

 

Hammerora

Description:

Hammerora is a load generation tool for the Oracle Database and Web Applications. Hammerora includes pre-built schema creation and load tests based on the industry standard TPC-C and TPC-H benchmarks to deploy against the Oracle database with multiple users. Hammerora also converts and replays Oracle trace files and enables Web-tier testing to build bespoke load tests for your entire Oracle application environment.

Requirement:

Platform Independent (Binaries for Linux and Windows)

 

httperf

Description:

Httperf is a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance. The focus is not on implementing one particular benchmark but on providing a robust, high-performance tool that facilitates the construction of both micro and macro level benchmarks. The three distinguishing characteristics of httperf are its robustness, which includes the ability to generate and sustain server overload, support for the HTTP/1.1 and SSL protocols, and its extensibility.

Requirement:

linux (Debian package available), HP-UX, perhaps other Unix
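
A typical httperf invocation looks like the following (www.example.com and the URI are placeholders for your own server and path):

> httperf --server www.example.com --port 80 --uri /index.html --num-conns 1000 --rate 20 --timeout 5

--num-conns sets the total number of connections to open, --rate the number of new connections started per second, and --timeout how long to wait for a response before counting an error.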

 

http_load

Description:

http_load runs multiple HTTP fetches in parallel, to test the throughput of a Web server. However, unlike most such test clients, it runs in a single process, to avoid bogging the client machine down. It can also be configured to do HTTPS fetches.

Requirement:

tbc
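
A minimal http_load run reads its target URLs from a text file; the file name and URL below are placeholders:

> echo "http://www.example.com/index.html" > urls.txt
> http_load -parallel 10 -seconds 60 urls.txt

-parallel controls how many fetches run concurrently and -seconds how long the test lasts; -rate and -fetches can be used instead to fix the request rate or the total number of fetches.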

 

Iperf

Description:

Iperf was developed by NLANR/DAST as a modern alternative for measuring maximum TCP and UDP bandwidth performance. Iperf allows the tuning of various parameters and UDP characteristics. Iperf reports bandwidth, delay jitter, datagram loss.

Requirement:

Platform Independent
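
Iperf runs as a server on one host and a client on another; a minimal sketch (server.example.com is a placeholder):

# on the machine under test
> iperf -s
# TCP throughput test from the client for 30 seconds
> iperf -c server.example.com -t 30
# UDP test at a 10 Mbit/s target rate (reports jitter and datagram loss)
> iperf -c server.example.com -u -b 10M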

 

IxoraRMS

Description:

Monitoring tool with great visualization and customization capabilities. It’s quick to install and suitable for use in performance labs.

Requirement:

Windows, Unix

 

j-hawk

Description:

j-hawk is a Java-based open source framework that can be incorporated into your application for performance testing. The idea is that you define modules and their tasks (i.e., methods) inside your application and register them with j-Hawk. j-Hawk executes the modules and generates a graphical performance report that can be analyzed to find performance bottlenecks in your application.

Requirement:

Windows, Ubuntu

 

JChav

Description:

JChav is a way to see the change in performance of your web application over time, by running a benchmark test for each build you produce. JChav reads all the JMeter logs from each of your runs (one per build), and produces a set of charts for each test in each run.

Requirement:

JMeter

 

JCrawler

Description:

A stress-testing tool for web applications that includes a crawling/exploratory feature. You can give JCrawler a set of starting URLs and it will begin crawling from that point onwards, going through any URLs it can find on its way and generating load on the web application. The load parameters (hits/sec) are configurable.

Requirement:

OS Independent

 

loadUI

Description:

loadUI is a tool for Load Testing numerous protocols, such as Web Services, REST, AMF, JMS, JDBC as well as Web Sites. Tests can be distributed to any number of runners and be modified in real time. LoadUI is tightly integrated with soapUI. LoadUI uses a highly graphic interface making Load Testing Fun and Fast.

Requirement:

Any

 

Lobo, Continuous Tuning

Description:

Lobo is a tool for performance testing and monitoring that allows you to monitor the evolution of performance along the time-line of the project. It was specially designed to be used in agile-iterative and evolutionary approaches.

Requirement:

Java

 

MessAdmin

Description:

MessAdmin is a light-weight, non-intrusive notification system and HttpSession administration tool for J2EE web applications, giving detailed statistics and information on the application. It installs as a plug-in to any Java EE web application and requires no code modification.

Requirement:

OS Independent

 

mstone

Description:

Mstone started as a mail performance measurement system but now can test svn, etc. It can simultaneously test SMTP, POP, IMAP, and some HTML based systems. It measures transaction latency in multiple stages, and graphs the combined results from multiple clients.

Requirement:

multiple (perl based)

 

Multi-Mechanize

Description:

Multi-Mechanize is an open source framework for web performance and load testing. It allows you to run simultaneous Python scripts to generate load (synthetic transactions) against a web site or web service; a typical command-line workflow is sketched below.

Requirement:

Any
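
Assuming a Multi-Mechanize version that ships the multimech-* console scripts, a typical workflow creates a project skeleton, adds your Python test scripts to it, and runs the project (my_project is a placeholder name):

> multimech-newproject my_project
> multimech-run my_project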

 

nGrinder

Description:

nGrinder is a stress-testing platform that lets you handle script creation, test execution, monitoring and result reporting in one place. The open-source nGrinder makes it easy to conduct stress tests by removing inconveniences and providing an integrated environment.

Requirement:

Windows / Linux / Mac

 

NTime

Description:

NTime is very similar to NUnit; it performs repeatable tasks that help managers, architects, developers and testers test an application's performance.

Requirement:

Windows 98 or above, .Net framework 1.1 or 2.0

 

OpenSTA

Description:

A distributed software testing architecture based on CORBA. Using OpenSTA (Open System Testing Architecture) a user can generate realistic heavy loads simulating the activity of hundreds to thousands of virtual users. OpenSTA graphs both virtual user response times and resource utilization information from all Web Servers, Application Servers, Database Servers and Operating Platforms under test, so that precise performance measurements can be gathered during load tests and analysis on these measurements can be performed.

Requirement:

Windows 2000, NT4 and XP

 

OpenWebLoad

Description:

OpenWebLoad is a tool for load testing web applications. It aims to be easy to use and to provide near real-time performance measurements of the application under test.

Requirement:

Linux, Windows

 

Ostinato

Description:

Ostinato is an open-source, cross-platform packet/traffic generator and analyzer with a friendly GUI. It aims to be “Wireshark in Reverse” and thus become complementary to Wireshark.

Requirement:

Cross-Platform

 

p-unit

Description:

An open source framework for unit testing and performance benchmarking, initiated by Andrew Zhang under the GPL license. p-unit supports running the same tests single-threaded or multi-threaded, tracks memory and time consumption, and generates results as plain text, images or PDF files.

Requirement:

OS Independent

 

PandoraFMS

Description:

Pandora FMS is open source monitoring software. It watches your systems and applications and lets you know the status of any element of those systems. Pandora FMS can detect a network interface going down, a defacement of your website, a memory leak in one of your server applications, or the movement of any value on the NASDAQ new-technology market. If you want, Pandora FMS can send out an SMS message when one of your systems fails… or when Google's stock drops below US$500.

Requirement:

32-bit MS Windows (NT/2000/XP), All POSIX (Linux/BSD/UNIX-like OSes), Solaris, HP-UX, IBM AIX

 

postal

Description:

An SMTP benchmarking tool. It is threaded and uses very little disk I/O (e-mail body content is randomly generated text). It includes an SMTP source, an SMTP sink and a POP server load tester (to pull sent mail).

Requirement:

Linux/UNIX; requires C compiler

 

Pylot

Description:

Pylot is a free open source tool for testing the performance and scalability of web services. It runs HTTP load tests, which are useful for capacity planning, benchmarking, analysis, and system tuning. Pylot generates concurrent load (HTTP requests), verifies server responses, and produces reports with metrics. Test suites are executed and monitored from a GUI.

Requirement:

Python 2.5+ required. Tested on Windows XP, Vista, Cygwin, Ubuntu, and MacOS.

 

Raw Load Tester

Description:

This application calls the URL you select as many times as you choose and tells you how long it took the server to respond. It writes some additional runtime details to the PHP log file so you can optionally do more granular analysis afterwards. Although the server processes most of the statistics, all URL requests come from the browser. You can run as many browsers and workstations simultaneously as you want.

Requirement:

PHP/JavaScript

 

Seagull

Description:

Seagull is a multi-protocol traffic generator test tool. Primarily aimed at IMS protocols, Seagull is a powerful traffic generator for functional, load, endurance, stress and performance tests for almost any kind of protocol. It currently supports the Diameter, XCAP over HTTP and TCAP (GSM Camel, MAP, Win) protocols.

Requirement:

Linux/Unix/Win32-Cygwin

 

Siege

Description:

SIEGE is an http regression testing and benchmarking utility. It was designed to let web developers measure the performance of their code under duress, to see how it will stand up to load on the internet. It lets the user hit a webserver with a configurable number of concurrent simulated users. Those users place the webserver “under siege.” SCOUT surveys a webserver and prepares the urls.txt file for a siege. In order to perform regression testing, siege loads URLs from a file and runs through them sequentially or randomly. Scout makes the process of populating that file easier. You should send out the scout, before you lay siege.

Requirement:

GNU/Linux, AIX, BSD, HP-UX and Solaris.
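
Typical siege invocations look like this (the URL and urls.txt file are placeholders):

# 25 concurrent users, each repeating the request 10 times
> siege -c 25 -r 10 http://www.example.com/
# 50 concurrent users hitting the URLs from a file for 2 minutes
> siege -c 50 -t 2M -f urls.txt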

 

Sipp

Description:

SIPp is a performance testing tool for the SIP protocol. Its main features are basic SIPStone scenarios, TCP/UDP transport, customizable (XML-based) scenarios, dynamic adjustment of the call rate and a comprehensive set of real-time statistics. It can also generate media (RTP) traffic for audio and video calls. An example invocation is sketched below.

Requirement:

Linux/Unix/Win32-Cygwin
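
A rough sketch of a SIPp test using its built-in scenarios (the address is a placeholder; -sn selects an embedded scenario):

# answer calls on the system-under-test side
> sipp -sn uas
# generate calls against it at 10 calls/second, with at most 100 simultaneous calls
> sipp -sn uac 192.168.0.20:5060 -r 10 -l 100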

 

SLAMD

Description:

SLAMD Distributed Load Generation Engine is a Java-based application designed for stress testing and performance analysis of network-based applications.

Requirement:

Any system with Java 1.4 or higher

 

Soap-Stone

Description:

Network benchmark application which can put your network under load and conduct automatic benchmark and recording activities.

Requirement:

OS Independent

 

stress_driver

Description:

General-purpose stress test tool.

Requirement:

Windows NT/2000, Linux

 

TestMaker

Description:

TestMaker from PushToTest.com delivers a rich environment for building and running intelligent test agents that test Web-enabled applications for scalability, functionality, and performance. It comes with a friendly graphical user environment, an object-oriented scripting language (Jython) to build intelligent test agents, an extensible library of protocol handlers (HTTP, HTTPS, SOAP, XML-RPC, SMTP, POP3, IMAP), a new agent wizard featuring an Agent Recorder to write scripts for you, a library of fully-functional sample test agents, and shell scripts to run test agents from the command line and from unit test utilities.

Requirement:

Windows, Linux, Solaris, and Macintosh

 

TPTEST

Description:

The purpose of TPTEST is to allow users to measure the speed of their Internet connection in a simple way. TPTEST measures the throughput speed to and from various reference servers on the Internet. Using TPTEST may help increase consumer/end-user knowledge of how Internet services work.

Requirement:

MacOS/Carbon and Win32

 

Tsung

Description:

Tsung is a distributed load testing tool. It is protocol-independent and can currently be used to stress HTTP, SOAP and Jabber servers (SSL is supported). It simulates complex user behaviour using an XML description file and reports many measurements in real time (including response times, CPU and memory usage from servers, customized transactions, etc.). HTML reports (with graphics) can be generated during the load. For HTTP, it supports 1.0 and 1.1, has a proxy mode to record sessions, and supports GET and POST methods, cookies, and basic WWW authentication. It has already been used to simulate thousands of virtual users. A minimal invocation is sketched below.

Requirement:

Tested on Linux, but should work on MacOSX and Windows.
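
A minimal Tsung run takes the XML scenario on the command line (myscenario.xml is a placeholder):

> tsung -f myscenario.xml start
> tsung status

tsung status reports on the running test; the HTML report is built afterwards from the log directory that the run produces.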

 

Valgrind

Description:

Valgrind is an award-winning suite of tools for debugging and profiling Linux programs. With the tools that come with Valgrind, you can automatically detect many memory management and threading bugs, avoiding hours of frustrating bug-hunting, making your programs more stable. You can also perform detailed profiling, to speed up and reduce memory use of your programs.

Requirement:

Linux
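
Typical invocations of two of the bundled tools (./myprogram is a placeholder for your own binary, ideally built with debug symbols):

# memory errors and a full leak report
> valgrind --tool=memcheck --leak-check=full ./myprogram
# CPU profiling; the callgrind output can be inspected with tools such as KCachegrind
> valgrind --tool=callgrind ./myprogram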

 

Web Application Load Simulator

Description:

LoadSim is a web application load simulator. It allows you to create simulations and have those simulations run against your webserver.

Requirement:

JDK 1.3 or above

 

Web Polygraph

Description:

Benchmarking tool for caching proxies, origin server accelerators, L4/7 switches, content filters, and other Web intermediaries.

Requirement:

C++ compiler

 

WebLOAD

Description:

WebLOAD Open Source is a fully functional, commercial-grade performance testing product based on WebLOAD, Radview’s flagship product that is already deployed at 1,600 sites. Available for free download and use, WebLOAD is a commercial-grade open source project with more than 250 engineering years of product development. Companies that require commercial support, additional productivity features and compatibility with third-party protocols have the option of purchasing WebLOAD Professional directly from RadView.

Requirement:

Windows NT/2000/XP

 

Very good list From:

ISOC – Leading Global Standards Organizations Endorse ‘OpenStand’ Principles that Drive Innovation and Borderless Commerce

[PISCATAWAY, N.J., and WASHINGTON, D.C., United States; GENEVA, Switzerland, and http://www.w3.org/ —29 August 2012]— Five leading global organizations—IEEE, Internet Architecture Board (IAB), Internet Engineering Task Force (IETF), Internet Society and World Wide Web Consortium (W3C)—today announced that they have signed a statement affirming the importance of a jointly developed set of principles establishing a modern paradigm for global, open standards. The shared “OpenStand” principles—based on the effective and efficient standardization processes that have made the Internet and Web the premiere platforms for innovation and borderless commerce—are proven in their ability to foster competition and cooperation, support innovation and interoperability and drive market success.

IEEE, IAB, IETF, Internet Society and W3C invite other standards organizations, governments, corporations and technology innovators globally to endorse the principles, which are available at open-stand.org.

The OpenStand principles strive to encapsulate that successful standardization model and make it extendable across the contemporary, global economy’s gamut of technology spaces and markets. The principles comprise a modern paradigm in which the economics of global markets—fueled by technological innovation—drive global deployment of standards, regardless of their formal status within traditional bodies of national representation. The OpenStand principles demand:

• cooperation among standards organizations;

• adherence to due process, broad consensus, transparency, balance and openness in standards development;

• commitment to technical merit, interoperability, competition, innovation and benefit to humanity;

• availability of standards to all, and

• voluntary adoption.

“New dynamics and pressures on global industry have driven changes in the ways that standards are developed and adopted around the world,” said Steve Mills, president of the IEEE Standards Association. “Increasing globalization of markets, the rapid advancement of technology and intensifying time-to-market demands have forced industry to seek more efficient ways to define the global standards that help expand global markets. The OpenStand principles foster the more efficient international standardization paradigm that the world needs.”

Added Leslie Daigle, chief Internet technology officer with the Internet Society: “International standards development for borderless economics is not ad hoc; rather, it has a paradigm—one that has demonstrated agility and is driven by technical merit. The OpenStand principles convey the power of bottom-up collaboration in harnessing global creativity and expertise to the standards of any technology space that will underpin the modern economy moving forward.”

Standards developed and adopted via the OpenStand principles include IEEE standards for the Internet’s physical connectivity, IETF standards for end-to-end global Internet interoperability and the W3C standards for the World Wide Web.

“The Internet and World Wide Web have fueled an economic and social transformation, touching billions of lives. Efficient standardization of so many technologies has been key to the success of the global Internet,” said Russ Housley, IETF chair. “These global standards were developed with a focus toward technical excellence and deployed through collaboration of many participants from all around the world. The results have literally changed the world, surpassing anything that has ever been achieved through any other standards-development model.”

Globally adopted design-automation standards, which have paved the way for a giant leap forward in industry’s ability to define complex electronic solutions, provide another example of standards developed in the spirit of the OpenStand principles. Another technology space that figures to demand such standards over the next decades is the global smart-grid effort, which seeks to augment regional facilities for electricity generation, distribution, delivery and consumption with a two-way, end-to-end network for communications and control.

“Think about all that the Internet and Web have enabled over the past 30 years, completely transforming society, government and commerce,” said W3C chief executive officer Jeff Jaffe. “It is remarkable that a small number of organizations following a small number of principles have had such a huge impact on humanity, innovation and competition in global markets.”

Bernard Aboba, chair of the IAB: “The Internet has been built on specifications adopted voluntarily across the globe. By valuing running code, interoperability and deployment above formal status, the Internet has democratized the development of standards, enabling specifications originally developed outside of standards organizations to gain recognition based on their technical merit and adoption, contributing to the creation of global communities benefiting humanity. We now invite standards organizations, as well as governments, companies and individuals to join us at open-stand.org in order to affirm the principles that have nurtured the Internet and underpin many other important standards—and will continue to do so.”

About IEEE

IEEE, a large, global technical professional organization, is dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice on a wide variety of areas ranging from aerospace systems, computers and telecommunications to biomedical engineering, electric power and consumer electronics. Learn more at http://www.ieee.org.

About the Internet Architecture Board (IAB)

The IAB is chartered both as a committee of the Internet Engineering Task Force (IETF) and as an advisory body of the Internet Society (ISOC). Its responsibilities include architectural oversight of IETF activities, Internet Standards Process oversight and appeal, and the appointment of the RFC Editor. The IAB is also responsible for the management of the IETF protocol parameter registries.

About the Internet Engineering Task Force

The Internet Engineering Task Force (IETF) is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It is open to any interested individual. The IETF is an organised activity of the Internet Society.

About the Internet Society

The Internet Society is the trusted independent source for Internet information and thought leadership from around the world. With its principled vision and substantial technological foundation, the Internet Society promotes open dialogue on Internet policy, technology, and future development among users, companies, governments, and other organizations. Working with its members and Chapters around the world, the Internet Society enables the continued evolution and growth of the Internet for everyone. For more information, visit www.internetsociety.org.

About the World Wide Web Consortium (W3C)

The World Wide Web Consortium (W3C) is an international consortium where Member organizations, a full-time staff, and the public work together to develop Web standards. W3C primarily pursues its mission through the creation of Web standards and guidelines designed to ensure long-term growth for the Web. Over 375 organizations are Members of the Consortium. W3C is jointly run by the MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) in the USA, the European Research Consortium for Informatics and Mathematics (ERCIM) headquartered in France and Keio University in Japan, and has additional Offices worldwide. For more information see http://www.w3.org/.

 

From: isoc.org

Android App Inventor Beta Preview

The Massachusetts Institute of Technology has opened its web-based App Inventor service for testing by the general public. MIT originally acquired the App Inventor source code from Google when the company terminated the project in December 2011.

App Inventor allows users with minimal programming knowledge to create Android applications through a drag-and-drop interface. The MIT Center for Mobile Learning, which was created to develop the application and educational materials around it, says that App Inventor is now “suitable for any use, including running classes”, but users should be aware that it has not yet been tested on MIT's servers under production load. Users are advised to maintain backup copies of applications developed with the service at all times.

According to the release announcement, a first stable release of the software should be expected within the first quarter of this year. The Center is currently working on improving the general performance of the service and has said it will also concentrate on developing resources to make the programme more useful as a learning tool.

The App Inventor source code is available from the project’s Google Code site and is published under the Apache Licence 2.0. Ready compiled versions for Windows, Mac OS X and Linux can also be downloaded from the site.

 

From:h-online

2010 Turing Lecture delivered at FCRC’11, June 5, 2011, in San Jose, CA, USA

Full Citation

Over the past 30 years, Leslie G. Valiant has made fundamental contributions to many aspects of theoretical computer science. His work has opened new frontiers, introduced ingenious new concepts, and presented results of great originality, depth, and beauty. Time and again, Valiant’s work has literally defined or transformed the computer science research landscape.

Valiant’s greatest single contribution may be his 1984 paper “A Theory of the Learnable,” which laid the foundations of computational learning theory. He introduced a general framework as well as concrete computational models for studying the learning process, including the famous “probably approximately correct” (PAC) model of machine learning. This has developed into a vibrant research area and has had enormous influence on machine learning, artificial intelligence, and many areas of computing practice, such as natural language processing, handwriting recognition, and computer vision.

Valiant has made many seminal contributions to computational complexity. He introduced the notion of complexity of enumeration, in terms of the complexity class #P. The most surprising consequence of this study was that natural enumeration problems can be intractable even when the corresponding decision problem is tractable. Another fundamental contribution to computational complexity was Valiant’s theory of algebraic computation, in which he established a framework for understanding which algebraic formulas can be evaluated efficiently.

A third broad area in which Valiant has made important contributions is the theory of parallel and distributed computing. His design of randomized routing strategies laid the groundwork for a rich body of research that exposed how randomization can be used to offset congestion effects in communication networks. He proposed the bulk synchronous model of parallel computation. He also posed a number of influential challenges leading to the construction of parallel algorithms for seemingly inherently sequential problems. Finally, the superconcentrators constructed by Valiant in the context of computational complexity established the fundamental role of expander graphs in computation.

From: ACM

Civic Commons – Let’s Transform Governments With Tech and Innovation

(and Save Millions of Dollars, Too)

Government entities at all levels face substantial and similar IT challenges, but today, each must take them on independently. Why can’t they share their technology, eliminating redundancy, fostering innovation, and cutting costs? We think they can.

 

Civic Commons – Overview from Civic Commons on Vimeo.

From:Civic Commons

How to Create a .deb package from source files

Assuming that your build from source is successful, you can make a Debian (Ubuntu) package (.deb):

First, install checkinstall:

> sudo apt-get install checkinstall

Rebuild the package using checkinstall:

> cd /path/to/extracted/package
> ./configure
> sudo make
> sudo checkinstall

It’s done! Get the resulting “.deb” file for future use.

It can later be installed using:

> sudo dpkg -i packagename.deb

You can remove it from your system using:

> sudo dpkg -r packagename.deb

Some packages require additional dependencies and optional parameters to be specified in order to build them successfully.
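
For example, configure options and package metadata can be passed explicitly; the prefix, package name and version below are placeholders, and checkinstall will prompt for anything you leave out:

> ./configure --prefix=/usr
> sudo make
> sudo checkinstall --pkgname=mypackage --pkgversion=1.0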

Hacking the Code of the Mind

TAU team connects neurons to computers to decipher the enigmatic code of neuronal circuits

A neuronal circuit engineered in the TAU lab.

Machine logic is based on human logic. But although a computer processor can be disassembled and dissected in logical steps, the same is not true for the way our brains process information, says Mark Shein of Tel Aviv University's School of Electrical Engineering.

Doctoral student Shein and his supervisors, Prof. Yael Hanein of the School of Electrical Engineering and Prof. Eshel Ben-Jacob of the School of Physics and Astronomy, want to understand the brain's logic. They have developed a new kind of lab-on-a-chip platform that may help neuroscientists understand one of the deepest mysteries of our brain — how neuronal networks communicate and work together. The chip was recently described in the journal PLoS ONE.

Within it, Shein has applied advanced mathematical and engineering techniques to connect neurons with electronics and understand how neuronal networks communicate. Hoping to answer ultimate questions about how our neuronal circuits work, the researchers believe their tool can also be used to test new drugs. It might also advance artificial intelligence and aid scientists in rewiring artificial limbs to the brain.

There are relatively simple neural “firing” patterns that can be measured with sensory organs like the ears or eyes, but researchers know little about deep thought processes. Could the brain’s electrical signals reveal the basis of thought itself?

“When we look at the neuronal networks operating in the ears or eyes, we have some idea about the coding schemes they utilize,” explains Shein. A researcher can apply a stimulus such as a bright light, for example, and then monitor responses in the eye’s neurons. But for more complex processes, like “thinking” or operating different sensory inputs and outputs together, “we are basically looking into a black box,” he says.

The brain is composed of a daunting number of circuits interconnected with other countless circuits, so understanding of how they function has been close to impossible. But using engineered brain tissue in a Petri dish, Shein’s device allows researchers to see what’s happening to well-defined neural circuits under different conditions. The result is an active circuitry of neurons on a man-made chip. With it they can look for patterns in bigger networks of neurons, to see if there are any basic elements for information coding.

Investigating the activity of single neurons is not enough to understand how a network functions. With nanotechnological systems and tools, now researchers can explore activity patterns of many neurons simultaneously. In particular, they can investigate how several groups of neurons communicate with each other, says Shein.

The hierarchy of the brain

With these network engineering techniques, the scientists cultured different sized networks of neuronal clusters. Once they looked at these groups, they found rich and surprising behaviors which could not be predicted from what scientists know about single neurons.

The researchers were also able to measure patterns from nerve activity, at nodes where a number of nerves converged into networks. What they detected appears to show that neural networks have a hierarchical structure — large networks are composed of smaller sub-networks. This observation, and a unique setup using electrodes and living nerves, allowed them to create hierarchical networks in a dish.

The brain’s circuits work like codes. They can see the patterns in the networks and simplify them, or control connectivity between cells to see how the neuronal network responds to various chemicals and conditions, the scientists report. One theory, proposed by Prof. Ben-Jacob, is that the human brain stores memories like a holograph of an image: small neural networks contain information about the whole brain, but only at a very low resolution.

So far the researchers are able to reveal that clusters of as few as 40 cells can serve as a minimal but sufficient functional network. This cluster is capable of sustaining neural network activity and communicating with other clusters. What this means exactly will be the next question.


For more neuroscience news from Tel Aviv University, click here.

Keep up with the latest AFTAU news on Twitter: http://www.twitter.com/AFTAUnews.

from: aftau.org

 

Air Power: New Device Captures Ambient Electromagnetic Energy to Drive Small Electronic Devices

Researchers have discovered a way to capture and harness energy transmitted by such sources as radio and television transmitters, cell phone networks and satellite communications systems.  By scavenging this ambient energy from the air around us, the technique could provide a new way to power networks of wireless sensors, microprocessors and communications chips.

Georgia Tech School of Electrical and Computer Engineering professor Manos Tentzeris displays an inkjet-printed rectifying antenna used to convert microwave energy to DC power. This grid was printed on flexible Kapton material and is expected to operate with frequencies as high as 10 gigahertz when complete. (Credit: Gary Meek)

“There is a large amount of electromagnetic energy all around us, but nobody has been able to tap into it,” said Manos Tentzeris, a professor in the Georgia Tech School of Electrical and Computer Engineering who is leading the research. “We are using an ultra-wideband antenna that lets us exploit a variety of signals in different frequency ranges, giving us greatly increased power-gathering capability.”

Tentzeris and his team are using inkjet printers to combine sensors, antennas and energy-scavenging capabilities on paper or flexible polymers. The resulting self-powered wireless sensors could be used for chemical, biological, heat and stress sensing for defense and industry; radio-frequency identification (RFID) tagging for manufacturing and shipping, and monitoring tasks in many fields including communications and power usage.

A presentation on this energy-scavenging technology was given July 6 at the IEEE Antennas and Propagation Symposium in Spokane, Wash.  The discovery is based on research supported by multiple sponsors, including the National Science Foundation, the Federal Highway Administration and Japan’s New Energy and Industrial Technology Development Organization (NEDO).

Communications devices transmit energy in many different frequency ranges, or bands.  The team’s scavenging devices can capture this energy, convert it from AC to DC, and then store it in capacitors and batteries. The scavenging technology can take advantage presently of frequencies from FM radio to radar, a range spanning 100 megahertz (MHz) to 15 gigahertz (GHz) or higher.

Georgia Tech School of Electrical and Computer Engineering professor Manos Tentzeris holds a sensor (left) and an ultra-broadband spiral antenna for wearable energy-scavenging applications. Both were printed on paper using inkjet technology. 

Scavenging experiments utilizing TV bands have already yielded power amounting to hundreds of microwatts, and multi-band systems are expected to generate one milliwatt or more. That amount of power is enough to operate many small electronic devices, including a variety of sensors and microprocessors.

And by combining energy-scavenging technology with super-capacitors and cycled operation, the Georgia Tech team expects to power devices requiring above 50 milliwatts.  In this approach, energy builds up in a battery-like supercapacitor and is utilized when the required power level is reached.

The researchers have already successfully operated a temperature sensor using electromagnetic energy captured from a television station that was half a kilometer distant.  They are preparing another demonstration in which a microprocessor-based microcontroller would be activated simply by holding it in the air.

Exploiting a range of electromagnetic bands increases the dependability of energy-scavenging devices, explained Tentzeris, who is also a faculty researcher in the Georgia Electronic Design Center at Georgia Tech.  If one frequency range fades temporarily due to usage variations, the system can still exploit other frequencies.

The scavenging device could be used by itself or in tandem with other generating technologies.  For example, scavenged energy could assist a solar element to charge a battery during the day.  At night, when solar cells don’t provide power, scavenged energy would continue to increase the battery charge or would prevent discharging.

Georgia Tech graduate student Rushi Vyas holds a prototype energy-scavenging device, while School of Electrical and Computer Engineering professor Manos Tentzeris displays a miniaturized flexible antenna that was inkjet-printed on paper and could be used for broadband energy scavenging. 

Utilizing ambient electromagnetic energy could also provide a form of system backup.  If a battery or a solar-collector/battery package failed completely, scavenged energy could allow the system to transmit a wireless distress signal while also potentially maintaining critical functionalities.

The researchers are utilizing inkjet technology to print these energy-scavenging devices on paper or flexible paper-like polymers – a technique they are already using to produce sensors and antennas. The result would be paper-based wireless sensors that are self-powered, low-cost and able to function independently almost anywhere.

To print electrical components and circuits, the Georgia Tech researchers use a standard materials inkjet printer.  However, they add what Tentzeris calls “a unique in-house recipe” containing silver nanoparticles and/or other nanoparticles in an emulsion.  This approach enables the team to print not only RF components and circuits, but also novel sensing devices based on such nanomaterials as carbon nanotubes.

When Tentzeris and his research group began inkjet printing of antennas in 2006, the paper-based circuits only functioned at frequencies of 100 or 200 MHz, recalled Rushi Vyas, a graduate student who is working with Tentzeris and graduate student Vasileios Lakafosis on several projects.

“We can now print circuits that are capable of functioning at up to 15 GHz — 60 GHz if we print on a polymer,” Vyas said. “So we have seen a frequency operation improvement of two orders of magnitude.”

The researchers believe that self powered, wireless paper-based sensors will soon be widely available at very low cost. The resulting proliferation of autonomous, inexpensive sensors could be used for applications that include:

· Airport security: Airports have both multiple security concerns and vast amounts of available ambient energy from radar and communications sources.  These dual factors make them a natural environment for large numbers of wireless sensors capable of detecting potential threats such as explosives or smuggled nuclear material.

· Energy savings: Self-powered wireless sensing devices placed throughout a home could provide continuous monitoring of temperature and humidity conditions, leading to highly significant savings on heating and air conditioning costs.  And unlike many of today’s sensing devices, environmentally friendly paper-based sensors would degrade quickly in landfills.

· Structural integrity: Paper or polymer based sensors could be placed throughout various types of structures to monitor stress.  Self powered sensors on buildings, bridges or aircraft could quietly watch for problems, perhaps for many years, and then transmit a signal when they detected an unusual condition.

· Food and perishable material storage and quality monitoring: Inexpensive sensors on foods could scan for chemicals that indicate spoilage and send out an early warning if they encountered problems.

· Wearable bio-monitoring devices: This emerging wireless technology could become widely used for autonomous observation of patient medical issues.

Research News & Publications Office
Georgia Institute of Technology
75 Fifth Street, N.W., Suite 314
Atlanta, Georgia  30308  USA

Media Relations Contacts: John Toon (404-894-6986)(jtoon@gatech.edu) or Abby
Robinson (404-385-3364)(abby@innovate.gatech.edu).

Writer: Rick Robinson

from:Georgia Tech

Release of DataCatalogs.org to map open data around the world

The following post is from Jonathan Gray, Community Coordinator at the Open Knowledge Foundation.

We’re very pleased to announce an alpha version of datacatalogs.org, a website to help keep track of open data catalogues from around the world. The project is being launched to coincide with our annual conference, OKCon 2011. You can see the site here:

http://datacatalogs.org

The project was borne out of an extremely useful workshop on data catalogue interoperability in Edinburgh earlier this year, and then with a few further online meetings. It is powered by the CKAN software, which also powers data.gov.uk and many other catalogues.

This is just the beginning of what we hope will become an invaluable resource for anyone interested in finding, using or having an overview of data catalogues from around the world. We have lots of ideas about improvements and features that we’d like to add. If you have anything you think we should prioritise, please let us know in comments below, or on the ckan-discuss list!

Below is a press release for the project (and here in Google Docs). If you know anyone who you think might be interested in this, we’d be most grateful for any help in passing it on!

PRESS RELEASE: Mapping open data around the world

BERLIN, 30th June 2011 – Today a broad coalition of stakeholders are launching DataCatalogs.org, a new project to keep track of open data initiatives around the world.

Governments are beginning to recognise that opening up public information can bring about a wide variety of social and economic benefits – such as increasing transparency and efficiency, creating jobs in the new digital economy, and enabling web and mobile developers to create new useful applications and services for citizens.

But it can be difficult to keep up with the pace of developments in this area. Following on from the success of initiatives like the Obama administration’s data.gov and the UK government’s data.gov.uk, nearly every week there is a new open data initiative from a local, regional or national government somewhere around the world – from Chicago to Torino, Morocco to Moldova.

A group of leading open data experts are helping to keep DataCatalogs.org updated, including representatives from international bodies such as the World Bank, independent bodies such as the W3C and the Sunlight Foundation, and numerous national governments.

Neil Fantom, Manager of the World Bank’s Development Data Group, says: “Open data is public good, but only if you can find it – we’re pleased to see initiatives such as DataCatalogs.org giving greater visibility to public information, allowing easier discovery of related content from different publishers and making open data more valuable for users.”

Beth Noveck, who ran President Obama’s open government programme and is now working with the UK Government says: “This project is a simple but important start to bringing together the community of key open data stakeholders. My hope is that DataCatalogs.org grows into a vibrant place to articulate priorities, find and mash up data across jurisdictions and curate data-driven tools and initiatives that improve the effectiveness of government and the lives of citizens.”

Cathrine Lippert, of the Danish National IT and Telecom Agency says: “DataCatalogs.org is a brilliant guide to keeping track of all the data that is being opened up around the world. In addition to our own national data catalogue, we can now point data re-users to DataCatalogs.org to locate data resources abroad.”

Andrew Stott, former Director of Digital Engagement at the UK’s Cabinet Office says: “This initiative will not only help data users find data in different jurisdictions but also help those implementing data catalogues to find good practice to emulate elsewhere in the world.”

Notes for editors

The Open Knowledge Foundation (okfn.org) is a not-for-profit organisation founded in 2004. It has played a significant role in supporting open data around the world, particularly in Europe, and helps to run the UK’s national data catalogue, data.gov.uk.

DataCatalogs.org is being launched at the Open Knowledge Foundation's annual conference, OKCon 2011 (okcon.org), which brings together developers, designers, civil servants, journalists and NGOs for a week of planning, coding and talks.

For further details please contact Jonathan Gray, Community Coordinator at the Open Knowledge Foundation on jonathan.gray@okfn.org.

Restoring Memory, Repairing Damaged Brains

USC Viterbi School of Engineering scientists have developed a way to turn memories on and off—literally with the flip of a switch.
Using an electronic system that duplicates the neural signals associated with memory, they managed to replicate the brain function in rats associated with long-term learned behavior, even when the rats had been drugged to forget.

Theodore Berger
“Flip the switch on, and the rats remember. Flip it off, and the rats forget,” said Theodore Berger of the USC Viterbi School of Engineering’s Department of Biomedical Engineering.

Berger is the lead author of an article that will be published in the Journal of Neural Engineering. His team worked with scientists from Wake Forest University in the study, building on recent advances in our understanding of the brain area known as the hippocampus and its role in learning.

In the experiment, the researchers had rats learn a task, pressing one lever rather than another to receive a reward. Using embedded electrical probes, the experimental research team, led by Sam A. Deadwyler of the Wake Forest Department of Physiology and Pharmacology, recorded changes in the rat's brain activity between the two major internal divisions of the hippocampus, known as subregions CA3 and CA1. During the learning process, the hippocampus converts short-term memory into long-term memory, the researchers' prior work has shown.

“No hippocampus,” says Berger, “no long-term memory, but still short-term memory.” CA3 and CA1 interact to create long-term memory, prior research has shown.

In a dramatic demonstration, the experimenters blocked the normal neural interactions between the two areas using pharmacological agents. The previously trained rats then no longer displayed the long-term learned behavior.

“The rats still showed that they knew ‘when you press left first, then press right next time, and vice-versa,’” Berger said. “And they still knew in general to press levers for water, but they could only remember whether they had pressed left or right for 5-10 seconds.”

Using a model created by the prosthetics research team led by Berger, the teams then went further and developed an artificial hippocampal system that could duplicate the pattern of interaction between CA3 and CA1.

Long-term memory capability returned to the pharmacologically blocked rats when the team activated the electronic device programmed to duplicate the memory-encoding function.

In addition, the researchers went on to show that if a prosthetic device and its associated electrodes were implanted in animals with a normal, functioning hippocampus, the device could actually strengthen the memory being generated internally in the brain and enhance the memory capability of normal rats.

“These integrated experimental modeling studies show for the first time that with sufficient information about the neural coding of memories, a neural prosthesis capable of real-time identification and manipulation of the encoding process can restore and even enhance cognitive mnemonic processes,” says the paper.

Next steps, according to Berger and Deadwyler, will be attempts to duplicate the rat results in primates (monkeys), with the aim of eventually creating prostheses that might help the human victims of Alzheimer’s disease, stroke or injury recover function.

The paper is entitled “A Cortical Neural Prosthesis for Restoring and Enhancing Memory.” Besides Deadwyler and Berger, the other authors are, from USC, BME Professor Vasilis Z. Marmarelis and Research Assistant Professor Dong Song, and from Wake Forest, Associate Professor Robert E. Hampson and Post-Doctoral Fellow Anushka Goonawardena.

Berger, who holds the David Packard Chair in Engineering, is the Director of the USC Center for Neural Engineering, Associate Director of the National Science Foundation Biomimetic MicroElectronic Systems Engineering Research Center, and a Fellow of the IEEE, the AAAS, and the AIMBE.

From Thursday 16 June through July 16, this paper can be downloaded from http://iopscience.iop.org/1741-2552/8/4/046017

From: USC Viterbi

Lowercase PostgreSQL column names

The following query returns one record per DDL statement, one ALTER TABLE ... RENAME for each column whose name is not already lowercase:

SELECT 'ALTER TABLE ' || quote_ident(c.table_schema) || '.' || quote_ident(c.table_name)
       || ' RENAME "' || c.column_name || '" TO ' || quote_ident(lower(c.column_name)) || ';' AS ddlsql
FROM information_schema.columns AS c
WHERE c.table_schema NOT IN ('information_schema', 'pg_catalog')
  AND c.column_name <> lower(c.column_name)
ORDER BY c.table_schema, c.table_name, c.column_name;

by postgresonline
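
To apply the renames automatically, one option is to run that query from a short script and execute each returned ddlsql row. The sketch below is only an illustration (not part of the postgresonline post); it assumes the node-postgres ("pg") client and a placeholder connection string, and the generated statements should be reviewed before running them against a production database.

import { Client } from "pg";

async function lowercaseColumnNames(connectionString: string): Promise<void> {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    // Same query as above: one ALTER TABLE ... RENAME per mixed-case column.
    const ddlQuery = `
      SELECT 'ALTER TABLE ' || quote_ident(c.table_schema) || '.' || quote_ident(c.table_name)
             || ' RENAME "' || c.column_name || '" TO ' || quote_ident(lower(c.column_name)) || ';' AS ddlsql
      FROM information_schema.columns AS c
      WHERE c.table_schema NOT IN ('information_schema', 'pg_catalog')
        AND c.column_name <> lower(c.column_name)
      ORDER BY c.table_schema, c.table_name, c.column_name;`;
    const result = await client.query(ddlQuery);
    for (const row of result.rows) {
      console.log(row.ddlsql);        // log each statement for review
      await client.query(row.ddlsql); // apply the rename
    }
  } finally {
    await client.end();
  }
}

lowercaseColumnNames("postgres://user:password@localhost:5432/mydb").catch(console.error);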

 

U.N. Report Declares Internet Access a Human Right

A United Nations report said Friday that disconnecting people from the internet is a human rights violation and against international law.

The report railed against France and the United Kingdom, which have passed laws to remove accused copyright scofflaws from the internet. It also protested blocking internet access to quell political unrest (.pdf).

While blocking and filtering measures deny users access to specific content on the Internet, states have also taken measures to cut off access to the Internet entirely. The Special Rapporteur considers cutting off users from internet access, regardless of the justification provided, including on the grounds of violating intellectual property rights law, to be disproportionate and thus a violation of article 19, paragraph 3, of the International Covenant on Civil and Political Rights.

The report continues:

The Special Rapporteur calls upon all states to ensure that Internet access is maintained at all times, including during times of political unrest. In particular, the Special Rapporteur urges States to repeal or amend existing intellectual copyright laws which permit users to be disconnected from Internet access, and to refrain from adopting such laws.

The report, by the United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, comes the same day an internet-monitoring firm detected that two thirds of Syria’s internet access has abruptly gone dark, in what is likely a government response to unrest in that country.

By David Kravets from Wired

Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

 

 

20 Things I Learned about Browsers and the Web – By Min Li Chan, Google Chrome Team

Late last year, we released an illustrated online guidebook for everyday users who are curious about how browsers and the web work. In building 20 Things I Learned about Browsers and the Web with HTML5, JavaScript and CSS with our friends at Fi, we heard from many of you that you’d like to get your hands on the source code. Today, we’re open sourcing all the code for this web book at http://code.google.com/p/20thingsilearned, so that you can use and tinker with the code for your own projects.

 

20 Things I Learned was celebrated this year as an Official Honoree at the 15th Annual Webby Awards in the categories of Education, Best Visual Design (Function), and Best Practices. For those of you who missed our initial release last year, here’s a quick recap of the APIs behind some of the web book’s popular features:

  • The book uses the HTML5 canvas element to animate some of the illustrations in the book and enhance the experience with transitions between the hard cover and soft pages of the book. The page flips, including all shadows and highlights, are generated procedurally through JavaScript and drawn on canvas. You can read more about the page flips on this HTML5rocks tutorial.
  • The book takes advantage of the Application Cache API so that it can be read offline after a user’s first visit.
  • With the Local Storage API, readers can resume reading where they left off (see the sketch after this list).
  • The History API provides a clutter-free URL structure that can be indexed by search engines.
  • CSS3 features such as web fonts, animations, gradients and shadows are used to enhance the visual appeal of the app.
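
As a rough sketch of how the Local Storage API can support the resume-reading behavior described in the list above (an illustration only, not the book’s actual code; the storage key and chapter identifiers are assumptions):

// Remember the last chapter a reader opened and restore it on the next visit.
const STORAGE_KEY = "lastChapter"; // assumed key name

function saveProgress(chapterId: string): void {
  // localStorage persists across browser sessions, unlike sessionStorage.
  localStorage.setItem(STORAGE_KEY, chapterId);
}

function resumeReading(defaultChapter: string): string {
  // Fall back to the first chapter when no progress has been stored yet.
  return localStorage.getItem(STORAGE_KEY) ?? defaultChapter;
}

document.addEventListener("DOMContentLoaded", () => {
  const chapter = resumeReading("chapter-1");
  console.log(`Resuming at ${chapter}`);
  saveProgress(chapter); // update as the reader navigates
});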

 

With this open source release, we’ve also taken the opportunity to translate 20 Things I Learned into 15 languages: Bahasa Indonesia, Brazilian Portuguese, Chinese (Simplified and Traditional), Czech, Dutch, English, French, German, Italian, Japanese, Polish, Russian, Spanish, and Tagalog.

We hope that web books like 20 Things I Learned continue to inspire web developers to find compelling ways to bring the power of open web technologies to education. 20 Things I Learned is best experienced in Chrome or any up-to-date, HTML5-compliant modern browser. For those of you who’ve previously read this web book, don’t forget to hit refresh on your browser to see the new language options.

Min Li Chan is a Product Marketing Manager on the Google Chrome Team and the project curator/author for 20 Things I Learned about Browsers and the Web.

From Googlecode

 

The Web That Wasn’t – Google Tech talk

http://www.youtube.com/watch?v=72nfrhXroo8

Google Tech talk

ABSTRACT

For most of us who work on the Internet, the Web is all we have ever really known. It’s almost impossible to imagine a world without browsers, URLs and HTTP. But in the years leading up to Tim Berners-Lee’s world-changing invention, a few visionary information scientists were exploring alternative systems that often bore little resemblance to the Web as we know it today. In this presentation, author and information architect Alex Wright will explore the heritage of these almost-forgotten systems in search of promising ideas left by the historical wayside.

The presentation will focus on the pioneering work of Paul Otlet, Vannevar Bush, and Doug Engelbart; forebears of the 1960s and 1970s like Ted Nelson, Andries van Dam, and the Xerox PARC team; and more recent forays like Brown’s Intermedia system. We’ll trace the heritage of these systems and the solutions they suggest to present-day Web quandaries, in hopes of finding clues to the future in the recent technological past.

Speaker: Alex Wright
Alex Wright is an information architect at the New York Times and the author of Glut: Mastering Information Through the Ages. Previously, Alex has led projects for The Long Now Foundation, California Digital Library, Harvard University, IBM, Microsoft, Rollyo and Sun Microsystems, among others. He maintains a personal Web site at http://www.alexwright.org/

Trip to Fernando de Noronha

I’ll enjoy my sabbatical leave (5 days) in Fernando de Noronha!!
Fernando de Noronha is an archipelago of 21 islands and islets in the Atlantic Ocean, 354 km (220 miles) offshore from the Brazilian coast. The main island has an area of 18.4 square kilometres (7.1 sq mi) and had a population of 3,012 in the year 2010. The area is a special municipality (distrito estadual) of the Brazilian state of Pernambuco and is also a UNESCO World Heritage Site. Its timezone is UTC−2 hours. The local population and travellers can get to Noronha by plane or cruise from Recife (545 km) or by plane from Natal (360 km). We’re going through Recife. An environmental preservation fee is charged to tourists upon arrival by Ibama (Institute of Environment and Renewable Natural Resources).

From wikipedia
More info: Noronha’s Government Site


ISACA and EC-Council Sign a formal memorandum of understanding (MOU) to Advance the Information Security Profession

ISACA and the EC-Council have signed a formal memorandum of understanding (MOU) that enables the organizations to share knowledge and collaborate to advance the global information security profession.

ISACA is a global association of more than 95,000 IT security, assurance and governance professionals. It was founded in 1969 and established the CISA, CISM, CGEIT and CRISC certifications. EC-Council is the provider of various technical security certifications, including the renowned Certified Ethical Hacker (CEH) certification, and since 2003, has trained more than 90,000 security professionals and certified over 40,000 members across 84 countries.

As part of the MOU, ISACA and the EC-Council will collaborate on select training and educational programs for the benefit of attendees. Additionally, EC-Council will continue to exempt Certified Information Systems Auditors (CISAs) from the Certified Ethical Hacker (CEH) exam and allow them to directly take the Certified Security Analyst exam to earn the Licensed Penetration Tester (LPT) certification. ISACA will provide continuing professional education (CPE) credit for ISACA certification holders who take and pass the CEH exam.

“Both ISACA and EC-Council are committed to providing value to information security professionals, and this agreement enables us to maximize the activities both organizations provide for the benefit of our constituents,” said Emil D’Angelo, CISA, CISM, international president of ISACA. “We are pleased to recognize our collaboration through this MOU and look forward to continuing our work together.”

“This collaboration will further enhance both our organizations’ commitment to the betterment of the IT security community,” says Mr. Jay Bavisi, president of EC-Council.  He adds, “This partnership between EC-Council and ISACA, a global organization like ours, can only bode well for the industry. Through this agreement, we will be able to create a strong, forward-thinking and vibrant pool of professionals who can better contribute to the information security community.”

About ISACA

With 95,000 constituents in 160 countries, ISACA (www.isaca.org) is a leading global provider of knowledge, certifications, community, advocacy and education on information systems (IS) assurance and security, enterprise governance and management of IT, and IT-related risk and compliance. Founded in 1969, the nonprofit, independent ISACA hosts international conferences, publishes the ISACA Journal, and develops international IS auditing and control standards, which help its constituents ensure trust in, and value from, information systems. It also advances and attests IT skills and knowledge through the globally respected Certified Information Systems Auditor (CISA), Certified Information Security Manager (CISM), Certified in the Governance of Enterprise IT (CGEIT) and Certified in Risk and Information Systems Control™ (CRISC™) designations. ISACA continually updates COBIT, which helps IT professionals and enterprise leaders fulfill their IT governance and management responsibilities, particularly in the areas of assurance, security, risk and control, and deliver value to the business.

Follow ISACA on Twitter: http://twitter.com/ISACANews.

Join ISACA on LinkedIn: ISACA (Official)

About EC-Council

The International Council of E-Commerce Consultants (EC-Council) is a member-based organization that certifies individuals in cybersecurity and e-commerce. It is the owner and developer of 16 security certifications, including Certified Ethical Hacker (CEH®), Computer Hacking Forensics Investigator (CHFI®) and EC-Council Certified Security Analyst (ECSA®)/Licensed Penetration Tester (LPT®). Its certificate programs are offered in over 84 countries around the world.

EC-Council has trained over 90,000 individuals and certified more than 40,000 members, through more than 450 training partners across 84 countries. These certifications are recognized worldwide and have received endorsements from various government agencies including the U.S. federal government via the Montgomery GI Bill, Department of Defense via DoD 8570.01-M, National Security Agency (NSA) and the Committee on National Security Systems (CNSS). EC-Council also operates EC-Council University and the global series of Hacker Halted and TakeDownCon security conferences. The global organization is headquartered in Albuquerque, New Mexico. More information about EC-Council is available at www.eccouncil.org.

Contact:

ISACA: Kristen Kessinger, +1.847.660.5512, kkessinger@isaca.org

EC-Council: Leonard Chin, +1.505.341.3228, leonard@eccouncil.org

From isaca.org

Website Security for Webmasters

Users are taught to protect themselves from malicious programs by installing sophisticated antivirus software, but they often also entrust their private information to various websites. As a result, webmasters have a dual task to protect both their website itself and the user data that they receive.

Over the years companies and webmasters have learned—often the hard way—that web application security is not a joke; we’ve seen user passwords leaked due to SQL injection attacks, cookies stolen with XSS, and websites taken over by hackers due to negligent input validation.

Today we’ll show you some examples of how a web application can be exploited so you can learn from them; for this we’ll use Gruyere, an intentionally vulnerable application we use for security training internally, and that we introduced here last year. Do not probe others’ websites for vulnerabilities without permission as it may be perceived as hacking; but you’re welcome—nay, encouraged—to run tests on Gruyere.

Client state manipulation – What will happen if I alter the URL?

Let’s say you have an image hosting site and you’re using a PHP script to display the images users have uploaded:

http://www.example.com/showimage.php?imgloc=/garyillyes/kitten.jpg

So what will the application do if I alter the URL to something like this and userpasswords.txt is an actual file?

http://www.example.com/showimage.php?imgloc=/../../userpasswords.txt

Will I get the content of userpasswords.txt?
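
That depends entirely on how the script handles the parameter. A common defense is to resolve the requested path on the server and reject anything that escapes the directory where the images live. The sketch below is a generic Node.js/TypeScript illustration, not Gruyere’s or any particular PHP script’s code; the image directory is an assumption.

// Reject path-traversal attempts before reading a requested image from disk.
import { resolve, sep } from "path";

const IMAGE_ROOT = resolve("/var/www/images"); // assumed upload directory

function safeImagePath(requested: string): string | null {
  // Resolve any "../" sequences, then ensure the result stays inside IMAGE_ROOT.
  const candidate = resolve(IMAGE_ROOT, "." + sep + requested);
  return candidate.startsWith(IMAGE_ROOT + sep) ? candidate : null;
}

console.log(safeImagePath("/garyillyes/kitten.jpg"));   // a path under IMAGE_ROOT
console.log(safeImagePath("/../../userpasswords.txt")); // null: traversal rejected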

Another example of client state manipulation is when form fields are not validated. For instance, let’s say you have a form in which the username of the submitter is stored in a hidden input field. Well, that’s great! Does that mean that if I change the value of that field to another username, I can submit the form as that user? It may very well happen; the user input is apparently not authenticated with, for example, a token which can be verified on the server.
Imagine the situation if that form were part of your shopping cart and I modified the price of a $1000 item to $1, and then placed the order.

Protecting your application against this kind of attack is not easy; take a look at the third part of Gruyere to learn a few tips about how to defend your app.
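
For the hidden-field example specifically, one widely used mitigation is to sign any server-controlled value so the client cannot change it unnoticed. The sketch below is a generic illustration using an HMAC in Node.js/TypeScript; the secret, field names and helper functions are assumptions, not Gruyere’s actual mechanism.

// Sign a hidden form value on the server and verify it when the form is submitted.
import { createHmac, timingSafeEqual } from "crypto";

const SECRET = "replace-with-a-real-server-side-secret"; // assumed secret key

function signField(value: string): string {
  return createHmac("sha256", SECRET).update(value).digest("hex");
}

// Emit the value together with its signature (escape values in real templates).
function hiddenFields(username: string): string {
  return `<input type="hidden" name="username" value="${username}">
<input type="hidden" name="sig" value="${signField(username)}">`;
}

// On submission, recompute the signature and compare in constant time.
function verifyField(value: string, sig: string): boolean {
  const expected = Buffer.from(signField(value));
  const provided = Buffer.from(sig);
  return expected.length === provided.length && timingSafeEqual(expected, provided);
}

console.log(hiddenFields("garyillyes"));
console.log(verifyField("garyillyes", signField("garyillyes")));  // true
console.log(verifyField("someoneelse", signField("garyillyes"))); // false: tampered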

Cross-site scripting (XSS) – User input can’t be trusted

A simple, harmless URL:

http://google-gruyere.appspot.com/611788451095/%3Cscript%3Ealert('0wn3d')%3C/script%3E

But is it truly harmless? If I decode the percent-encoded characters, I get:

<script>alert('0wn3d')</script>

Gruyere, just like many sites with custom error pages, is designed to include the path component in the HTML page. This can introduce security bugs, like XSS, as it introduces user input directly into the rendered HTML page of the web application. You might say, “It’s just an alert box, so what?” The thing is, if I can inject an alert box, I can most likely inject something else, too, and maybe steal your cookies which I could use to sign in to your site as you.

Another example is when the stored user input isn’t sanitized. Let’s say I write a comment on your blog; the comment is simple:

<a href="javascript:alert('0wn3d')">Click here to see a kitten</a>

If other users click on my innocent link, I have their cookies.

You can learn how to find XSS vulnerabilities in your own web app and how to fix them in the second part of Gruyere; or, if you’re an advanced developer, take a look at the automatic escaping features in template systems we blogged about previously on this blog.
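
Beyond an auto-escaping template system, the basic defense is to escape user-controlled strings before writing them into HTML. A minimal sketch (an illustration only, not Gruyere’s actual fix):

// Escape the characters that let user input break out of its HTML context.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const comment = `<a href="javascript:alert('0wn3d')">Click here to see a kitten</a>`;
// The comment is rendered as inert text instead of an active link.
console.log(escapeHtml(comment));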

Cross-site request forgery (XSRF) – Should I trust requests from evil.com?

Oops, a broken picture. It can’t be dangerous–it’s broken, after all–which means that the URL of the image returns a 404 or it’s just malformed. Is that true in all of the cases?

No, it’s not! You can specify any URL as an image source, regardless of its content type. It can be an HTML page, a JavaScript file, or some other potentially malicious resource. In this case the image source was a simple page’s URL:

That page will only work if I’m logged in and I have some cookies set. Since I was actually logged in to the application, when the browser tried to fetch the image by accessing the image source URL, it also deleted my first snippet. This doesn’t sound particularly dangerous, but if I’m a bit familiar with the app, I could also invoke a URL which deletes a user’s profile or lets admins grant permissions for other users.

To protect your app against XSRF you should not allow state changing actions to be called via GET; the POST method was invented for this kind of state-changing request. This change alone may have mitigated the above attack, but usually it’s not enough and you need to include an unpredictable value in all state changing requests to prevent XSRF. Please head to Gruyere if you want to learn more about XSRF.
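
A typical way to add that unpredictable value is a per-session token that is embedded in every form and checked on each state-changing POST. The sketch below is a generic Node.js/TypeScript illustration; the in-memory session map and field handling are assumptions, not Gruyere’s implementation.

// Issue a random per-session token and require it on state-changing requests.
import { randomBytes, timingSafeEqual } from "crypto";

const sessionTokens = new Map<string, string>(); // sessionId -> CSRF token

function issueToken(sessionId: string): string {
  const token = randomBytes(32).toString("hex"); // unpredictable value
  sessionTokens.set(sessionId, token);
  return token; // embed this in a hidden form field or custom request header
}

function checkToken(sessionId: string, submitted: string): boolean {
  const expected = sessionTokens.get(sessionId);
  if (!expected) return false;
  const a = Buffer.from(expected);
  const b = Buffer.from(submitted);
  return a.length === b.length && timingSafeEqual(a, b); // constant-time compare
}

const token = issueToken("session-123");
console.log(checkToken("session-123", token));          // true: request allowed
console.log(checkToken("session-123", "forged-value")); // false: reject the request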

Cross-site script inclusion (XSSI) – All your script are belong to us

Many sites today can dynamically update a page’s content via asynchronous JavaScript requests that return JSON data. Sometimes, JSON can contain sensitive data, and if the correct precautions are not in place, it may be possible for an attacker to steal this sensitive information.

Let’s imagine the following scenario: I have created a standard HTML page and send you the link; since you trust me, you visit the link I sent you. The page contains only a few lines:

<script>function _feed(s) {alert("Your private snippet is: " + s['private_snippet']);}</script>
<script src="http://google-gruyere.appspot.com/611788451095/feed.gtl"></script>

Since you’re signed in to Gruyere and you have a private snippet, you’ll see an alert box on my page informing you about the contents of your snippet. As always, if I managed to fire up an alert box, I can do whatever else I want; in this case it was a simple snippet, but it could have been your biggest secret, too.

It’s not too hard to defend your app against XSSI, but it still requires careful thinking. You can use tokens as explained in the XSRF section, set your script to answer only POST requests, or simply start the JSON response with a prefix that is not executable as JavaScript (for example ")]}'," followed by a newline) to make sure the script cannot run when included from another site.
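
As a small sketch of the prefix approach (an illustration, not Gruyere’s code): the server prepends a string that is a JavaScript syntax error, and your own fetch/XHR code strips it before parsing, so a cross-site <script src=...> include fails before any attacker-defined callback can run. The ")]}'," prefix used here is a common convention; the exact value is an assumption.

// Prepend a non-executable prefix to JSON responses and strip it client-side.
const XSSI_PREFIX = ")]}',\n"; // evaluating this as a script throws a SyntaxError

function writeJson(data: unknown): string {
  // What the server sends: prefix + JSON body.
  return XSSI_PREFIX + JSON.stringify(data);
}

function readJson(body: string): unknown {
  // Trusted client code knows to strip the prefix before JSON.parse.
  if (!body.startsWith(XSSI_PREFIX)) throw new Error("unexpected response format");
  return JSON.parse(body.slice(XSSI_PREFIX.length));
}

const response = writeJson({ private_snippet: "s3cr3t" });
console.log(readJson(response)); // { private_snippet: 's3cr3t' }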

SQL Injection – Still think user input is safe?

What will happen if I try to sign in to your app with a username like

JohnDoe'; DROP TABLE members;--

While this specific example won’t expose user data, it can cause great headaches because it has the potential to completely remove the SQL table where your app stores information about members.

Generally, you can protect your app from SQL injection with proactive thinking and input validation. First, are you sure the SQL user needs to have permission to execute "DROP TABLE members"? Wouldn’t it be enough to grant only SELECT rights? By setting the SQL user’s permissions carefully, you can avoid painful experiences and lots of trouble. You might also want to configure error reporting in such a way that the database and its tables’ names aren’t exposed in the case of a failed query.
Second, as we learned in the XSS case, never trust user input: what looks like a login form to you looks like a potential doorway to an attacker. Always sanitize and quote-safe the input that will be stored in a database, and whenever possible use prepared (parameterized) statements, which are available in most database programming interfaces.
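
A minimal sketch of a parameterized query, using the node-postgres ("pg") client as an example driver; the connection details and the members table are placeholders, not part of the original post.

// Pass user input as a bound parameter instead of concatenating SQL strings.
import { Client } from "pg";

async function findMember(username: string): Promise<unknown[]> {
  const client = new Client({ connectionString: "postgres://user:password@localhost:5432/mydb" });
  await client.connect();
  try {
    // $1 is a placeholder; the driver sends the value separately from the SQL text,
    // so "JohnDoe'; DROP TABLE members;--" is treated as a literal username.
    const result = await client.query(
      "SELECT id, username FROM members WHERE username = $1",
      [username]
    );
    return result.rows;
  } finally {
    await client.end();
  }
}

findMember("JohnDoe'; DROP TABLE members;--").then(console.log).catch(console.error);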

Knowing how web applications can be exploited is the first step in understanding how to defend them. In light of this, we encourage you to take the Gruyere course, take other web security courses from the Google Code University and check out skipfish if you’re looking for an automated web application security testing tool. If you have more questions please post them in our Webmaster Help Forum.

Written by Gary Illyes, Webmaster Trends Analyst from Google Online Security blog

Computer scientist annotations