
Eclipse is giving this error when attempting to install the Maven SCM handler for Subclipse with svn 1.8 and Subclipse 1.10.x:

Cannot complete the install because one or more required items could not be found.
  Software being installed: Maven SCM handler for Subclipse 0.13.0.201303011221 (org.sonatype.m2e.subclipse.feature.feature.group 0.13.0.201303011221)
  Missing requirement: Maven SCM Handler for Subclipse 0.13.0.201303011221 (org.sonatype.m2e.subclipse 0.13.0.201303011221) requires 'bundle org.tigris.subversion.subclipse.core [1.6.0,1.9.0)' but it could not be found
  Cannot satisfy dependency:
    From: Maven SCM handler for Subclipse 0.13.0.201303011221 (org.sonatype.m2e.subclipse.feature.feature.group 0.13.0.201303011221)
    To: org.sonatype.m2e.subclipse [0.13.0.201303011221]

Until Issue 1557 is resolved at Tigris.org, you'll need to build and install the patched Maven SCM handler from GitHub:

git clone https://github.com/tesla/m2eclipse-subclipse
cd m2eclipse-subclipse
mvn install

To install the patched SCM handler, add the build output directory as a local update site in Eclipse, e.g.

file:/Users/lou/m2eclipse-subclipse/org.sonatype.m2e.subclipse.feature/target/site/
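
Alternatively, the install can be scripted with the Eclipse p2 director application. This is just a sketch; it assumes the eclipse launcher is on your PATH, and the IU id is taken from the error message above:

eclipse -nosplash -application org.eclipse.equinox.p2.director \
    -repository file:/Users/lou/m2eclipse-subclipse/org.sonatype.m2e.subclipse.feature/target/site/ \
    -installIU org.sonatype.m2e.subclipse.feature.feature.group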

 


Groovy MagicDraw Element Edit Macro

I use MagicDraw with the SysML plugin for architectural models, including requirements management.

Here is a small Groovy script macro that can be customized to edit elements with fewer clicks than the standard interface. It works by opening an editor window for each element selected on the drawing surface.

I’ve installed this macro script to launch with a hot key. When I select an element and press the hot key, the editor pops up.

Thanks to Mindaugas Genutis at MagicDraw support for the fiddly bits that calculate where to put the editor dialog.

import com.nomagic.magicdraw.core.Application
import com.nomagic.magicdraw.automaton.AutomatonMacroAPI
import com.nomagic.magicdraw.openapi.uml.SessionManager
import groovy.swing.SwingBuilder
import com.nomagic.magicdraw.ui.dialogs.MDDialogParentProvider;
import java.awt.BorderLayout
import javax.swing.BoxLayout
import java.awt.Rectangle
import java.awt.Point

try {
	SessionManager.getInstance().createSession("Automaton_Macro_Script_Execute");
	modelData = AutomatonMacroAPI.getModelData();   
	project = Application.getInstance().getProject();
	diagram = project.getActiveDiagram();
	selected = diagram.getSelected();

	selected.each{el ->

		// this bit calculates a screen location for the editor
		Point drawAreaLocation = el.getDiagramPresentationElement().getPanel().getDrawArea().getLocationOnScreen();
		Rectangle symbolBounds = el.getBounds();
		Point guiLocation = new Point((int)(drawAreaLocation.x + symbolBounds.x), (int)(drawAreaLocation.y + symbolBounds.y));

		el.element.appliedStereotypeInstance.each{s ->
			System.out.println(s.name)
			s.classifier.each{cl ->
				System.out.println(cl.name)
				// we are looking for selected SysML requirements
				if(cl.name.equals("Requirement")){
					cl.member.each{mem ->
						System.out.println(mem.humanName)
					}
				}
			}

			// getOpaqueObjectByPath makes it easy to get and set model element attributes 
			def element = el.element.qualifiedName
			def req = AutomatonMacroAPI.getOpaqueObjectByPath(element)

			new SwingBuilder().edt {
			  def f = frame(title:'Frame', size:[300,300], show: true, alwaysOnTop: true, location:guiLocation) {

			    textlabel = label(text:element, constraints: BorderLayout.NORTH)
				def id
				def t
				panel{
					id = textField(text: req.id, columns:16)
					scrollPane(){
				    	t = textArea(text: req.text, lineWrap: true, wrapStyleWord: true, columns:20, rows:10)
					}
				}
				button(text:'Change Text',
					actionPerformed: {
						SessionManager.getInstance().createSession("Change_Requirement_Text");
						req.text = t.text
						req.id = id.text
						SessionManager.getInstance().closeSession();
					}, 
					constraints:BorderLayout.SOUTH)
			  }
			}
		}
	}

} finally {
	SessionManager.getInstance().closeSession();
}

Here's what this looks like when I select a requirement and hit the hot key I've defined for the macro.

Groovy Element Editor


Short and to the point:

sudo apt-get install python-software-properties
sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"
sudo apt-get update
sudo apt-get install sun-java6-jdk sun-java6-jre
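
To confirm the Sun JDK is now active (update-alternatives lets you switch if another JDK is still selected):

java -version
sudo update-alternatives --config java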

Silver Bullets

I’m playing with Grails. It’s pretty impressive, although the potential for Hibernate abuse is alarming to me.

This alarm is not so much based on any broken feature of the implementation as on the exchanges I see on forums regarding Hibernate performance and scalability, where the typical answers amount to "these are not the droids you're looking for" and "move along".

There are a lot of vigorous protestations and much hand waving in Spring-land regarding potential architectural brokenness, stemming from broad statements about Hibernate perfection in all cases, with passing mention of joyful Terracotta as a fix if needed.

The wholesale movement of caching from the database tier to the application tier just looks like pushing the problem around, making scaling more difficult to boot if you care a whit about synchronization. More silver bullets I guess. From a purely operational perspective, you also need to ask where and how you are best equipped to manage caching as an architectural feature.

I suppose as a career move I should be more aligned with sweeping "minor" architectural concerns under the rug; no one wants to hear about them. The financial successes of Oracle RAC (distributed lock contention), Veritas Cluster (where's the quorum device), VMware (works great for every load) and their ilk attest to blissful ignorance being a tack with a more positive pecuniary outcome.

I don't think I risk being overly negative here, since the preponderance of press is on the side of Hibernate. It's just that blind acceptance of sweeping claims about the qualities of any product or architectural approach, without a complete understanding of the physics involved, invariably results in surprises and a search for the next fix.

After much too much searching, in my opinion, I found relief for my Hibernate concerns in a couple of excellent articles, the most enlightening of which discusses in some detail the relationship of the Hibernate session cache with transaction management. This, coupled with a short section of Grails documentation on managing transactions, has me much more at ease now.
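
For instance, making transaction and flush boundaries explicit keeps the session cache from surprising you. A minimal sketch, assuming a hypothetical Book domain class:

// Book is a hypothetical Grails domain class: class Book { String title }
Book.withTransaction { status ->
    def book = Book.get(1)
    book.title = 'Revised Title'
    // flush explicitly so the write happens inside this transaction,
    // rather than whenever the Hibernate session decides to flush
    book.save(flush: true)
    // the transaction can also be rolled back explicitly:
    // status.setRollbackOnly()
}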

It’s not so much that you should design for the edge case, but that you need to know what the edges are and where the bodies are buried. So, I now embrace Hibernate, very carefully.


Ikai Lan’s excellent article “JDBC Connection Pooling for Rails on Glassfish” obsoletes my September 2007 article on this subject.

The importance of disconnecting from the database between queries is elaborated in the comments to the article.


"Red Hat CTO elaborates on lofty 'cloud' vision" discusses the effort Red Hat is making to engage its customer base on Wall Street:

“In terms of engaging its customer base, for example, Red Hat has worked with Wall Street’s financial institutions to simplify data center operations by capturing a single operating system image, including hardware and software. The payoff: IT has only one image to update and manage, which can be deployed across the network, he said. Currently, Red Hat is testing this technology by burning an OS image onto a USB key and using it to boot up servers, desktops and laptops, he said.”

The thing to note is that an effective virtualization strategy does not virtualize the existing, diverse landscape; it redeploys into a highly standardized infrastructure with uniform configurations. Furthermore, the effective process involves discovering the 80% solution: the small set of standard virtualization strategies and standards that meets the vast majority of requirements for the target workloads.


While looking for an OS X alternative to the ESX Virtual Infrastructure Client for accessing the console of a virtual machine hosted on ESX, I found this article. An example configuration for accessing the console of a client on the ESX server over VNC port 5901 with a password of "secret" requires adding the following entries to the .vmx file for the virtual machine:

RemoteDisplay.vnc.enabled = "true"
RemoteDisplay.vnc.port = "5901"
RemoteDisplay.vnc.password = "secret"

The instructions work for ESX, except for one small detail: the ESX server won't accept remote VNC connections. There is likely some way to turn this on, but exposing VNC connections directly on a server is typically disallowed and is considered a security hazard.

The solution is to use SSH port forwarding. If the VNC port on the ESX server is 5901, create a tunnel with the following ssh command:

ssh -L5901:localhost:5901 user@esxhost

where "esxhost" is the hostname of the ESX server and "user" is an authenticated user on the ESX host.

Once logged in, you can connect to VNC port 5901 using any VNC client on the machine you connected from. The host should be "localhost", the display should be "1" (VNC display 1 corresponds to port 5901) and the password should be "secret". The console for the client on the ESX server should then be displayed. My VNC client of choice on OS X is "Chicken of the VNC", but there are many alternative VNC clients for OS X and other operating systems.
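
If you need consoles for several virtual machines, each VM must be given its own VNC port in its .vmx file, and the tunnels can be combined in a single ssh command. A sketch, assuming a second VM configured with RemoteDisplay.vnc.port = "5902":

ssh -L5901:localhost:5901 -L5902:localhost:5902 user@esxhost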

Note that ssh is disabled by default for root on the ESX server, so if the "user" for ssh access is root, you will either need to enable root access by editing /etc/ssh/sshd_config on the ESX server and setting "PermitRootLogin" to "yes", or use another user authenticated on the ESX console. The latter method is more secure, since enabling root ssh access to the console is inadvisable for production installations.


Zero Data Centers

I just stumbled upon a curious headline “Sun Plans To Close Its Data Centers” describing a post by Brian Cinque. Brian’s post was about Sun’s aggressive work within SunIT to reduce data center costs, including power and cooling requirements as well as the continuing vision of utility computing as seen by SunIT. The program has a target of zero Sun data centers by 2015.

The blog post was a healthy perspective on the future of SunIT from the viewpoint of the writer as an architect in SunIT operations, not Sun Microsystems in its entirety. That is, there will surely always be a need for Sun to maintain infrastructure internally to develop the machines and products that Sun sells to its customers to service their businesses. These businesses include Sun’s own business operations though SunIT.

Does SunIT need to own and manage its own data centers for all production systems, all desktop infrastructure and all other Sun business operations? Sun needs this capability no more than any other business does for most standard business operations, but Sun certainly does need a data center footprint insofar as it assists Sun in developing and perfecting Sun technology for sale.

For example, my home directory is managed on a very busy SunIT-managed server running the bleeding edge of Nevada. This is good for Sun's customers and good for Sun's strategic ability to understand and develop solutions for our customers. We drink our own Kool-Aid before we sell it to our customers. Does everyone at Sun have their home directory on a server at or close to the tip of Nevada? Certainly not. Most of the Sun home directory infrastructure is on standard Solaris, the current generally available release.

Will it always make cost effective sense for SunIT to manage typical, generally available, home directory infrastructure on captive hardware in SunIT managed data centers? Perhaps not. Sun certainly has a right, and an obligation to its stockholders, to leverage the same cost-effective utility and cloud computing models that Sun is helping to develop and foster. Again, to be clear, I’m not referring to SunIT participation in the management and maintenance of bleeding edge Nevada or whatever happens to be the next generation of Sun infrastructure and Sun software. This sort of activity is, and likely always will be, a valuable SunIT contribution to Sun’s ability to serve its customers.

Somehow, the healthy reduction in Sun’s compute footprint, increasing utilization, lowering costs, reducing the number of data centers, including through vendor managed Sun servers, is by some erroneous extension an indictment of Sun products and services. I don’t get that at all. If computing with Sun products and services is simpler, easier, less expensive, and enterprise services and applications can be more rapidly provisioned and are less expensive to manage and operate than the competitive products, won’t the world need a lot more Sun servers? Does it really matter where these servers are housed? Have the laws of supply and demand been repealed? What am I missing?

Somehow it seems that housing and managing servers has a value proposition that can only be met by captive IT operations. I don't get that either. The truth is we are facing a future where computing will be 25kW a rack or more for fully populated racks, roughly 5 times the current, typical rack power densities and data center power designs. True, these densities come with beneficial orders-of-magnitude increases in compute capacity, but is a 5kW-per-rack data center with a few servers per rack sensible? This reality challenges the physics and financial viability of in-house data centers. As computing demands continue to increase, the laws of physics and economics will tend to compel more and more data center consolidation, probably as close to power plants as possible to increase efficiency and reduce line loss. In the future, power will cost more than the computer hardware, utilization rates will have to improve, and new application deployments will have to take minutes, not months. None of this should be news to anyone.

Knowing this, the preponderance of negative and emotional reactions to Brian's post have all at once puzzled, surprised and alarmed me.

Some of the more telling reactions are “Is this half-baked plan/news something you should post in a public blog. Moron!” and “you’re confusing some people ….” or “Look how foolish parts of Sun’s management behave…” These are fearful reactions. These seem like reactions from people who find their cheese is moving.

If so, I agree, your cheese is moving.

Perhaps the wording in the blog itself, or the preponderance of other trade press and blogs regarding industry trends, has been unclear, but whether it is 2015, or earlier or later by a bit, the world view through the looking glass of captive enterprise data center computing, just like the now defunct view of the world through mainframe glass-house computing, is approaching its end. We will still obviously need general purpose computing infrastructure and data centers somewhere, but what is it about the way we currently provision and manage computing infrastructure that is immutable or unassailable? What is it about the likely end-games and outcomes for the current direction of the industry, and trends toward utility and cloud computing, SOA and SaaS, that is confusing, hard to understand, inscrutable or surprising?

If all the business and technical mechanisms were in place today to enable businesses to use utility and cloud computing securely, cheaply, quickly and efficiently, eliminating the need to design, build, staff and maintain costly and complicated captive data centers, why would the data center as we know it today exist? What is so insanely great about owning and managing a data center that, in the future, provides no strategic advantage to the business that owns it? Are all of the IT departments, mechanisms, operations and infrastructure patterns of today somehow more entitled to survival than the glass-house operations they replaced? I think not.

Admittedly, the mature mechanisms that would allow all enterprise computing to be provisioned external to the enterprise on a compute grid are not yet here. We face plenty of technical and physical challenges, but based on the sort of reactions seen to Brian's post, it seems the political challenges to the changes will be the greatest.

One example given as an objection to "zero Sun data centers" in the blog post reactions was a question concerning how Sun would host its own Sun Ray desktop infrastructure on some vendor managed compute cloud or utility facility. The challenges here are more around data security than around network performance and latency, since much of Sun's work-at-home staff already use Sun Rays connected through VPN and high-latency broadband or DSL connections to the Sun network. Also, as a corollary trend, I can assure you that as high-speed wired and wireless networking becomes more ubiquitous, I'll be eyeing a Sun Ray laptop. Lugging around gigabytes of disk will become silly, needlessly backbreaking and dangerous. I'm not alone; several of my colleagues in various Sun groups have been playing with this technology. In any event, my Sun Ray needs an IP address that provides my desktop services, not a data center. How the service is provided and where the service sits on the network is irrelevant to the Sun Ray or to me as a desktop user.

As the business and technical solutions for data and network security in virtualized, multi-tenant environments mature, nothing stands in the way of hosting Sun Ray sessions on vendor managed infrastructure. In fact, because of Sun’s ruthless elimination of heavy-weight desktop infrastructure in favor of Sun Ray, Sun is in much better shape to leverage and execute on utility or cloud computing for desktops than most businesses.

I’d put a much different spin on the problem of secure cloud computing or, more correctly, the opportunity. What if we could dynamically create networked computer environments that we could share securely with selected customers, vendors and partners? What if we could create virtual data centers, just for individual projects, programs or any business interaction, in a few minutes? The problems going forward revolve around dynamic management of secure, persistent interactions and virtual computing facilities for any imaginable grouping of individuals, groups and businesses. Do you really think this is a pipe dream? How far are the current social networking technologies from this reality?

I imagine the biggest problem people are having visualizing the not-too-distant future state comes from comparisons to the currently prevalent enterprise outsourcing models, which are, in my humble opinion, largely wrong-headed, expensive disasters. Simply turning your data center staff and equipment over to a vendor doesn't add a lot of value, or at least not very interesting value to me.

A better sense for the future can be gleaned from visualizing a developer writing a Ruby on Rails or PHP application for deployment to a full-service Solaris Zone or “kernel in user space” provider on the Internet or Facebook, Amazon EC2 or some other “social networking” framework. How great is the leap in technology and vendor capability to an enterprise developer deploying an application to an external host for use by an internal department or selected partners? Not very far, I think. In fact, surprise of surprises, it is already happening.

On more than one occasion, my group has "turned up" small but important applications that our customers' business departments are hosting externally as part of our application inventory efforts supporting consolidation initiatives. How much easier, cheaper and more secure does the application outsourcing avenue need to be before this trickle of occasions becomes an avalanche? The IT departments where this has happened are almost always dismayed and surprised, but then again, they maintain inventories of servers, not applications, right? How would they know what is going on with their customers?

The best, albeit weak, argument I've heard against this trend is that externally hosted applications lack the control and security required by regulatory drivers like Sarbanes-Oxley. This is true, but not compelling, and sounds more like an excuse than an argument. Do you think the right, bright hosting vendor in the not-too-distant future might enable better, more cost-effective configuration management, security and control than many IT operations?

Here is the bit that surprises me the most about the surprise. There is very little discussion of why this happens and how to provide the types of IT services the business needs to avoid these situations. Is enterprise IT entitled to provide all computing services for all time solely in its captive or traditional out-sourced data centers? We live in a competitive, capitalist economy, where innovation rules, and the pieces are coming together for disruptive changes to the way enterprises purchase and manage computing.

To me, the visceral reactions to Brian’s post are far more alarming than the prediction itself. The reactions indicate we have a long way to go in this industry to embrace the inevitable changes that are coming. If you don’t want your cheese to be moved for you, you had better move it yourself. Enterprise IT needs to find more innovative ways to provide secure, rapidly provisioned, highly-available, cost-effective solutions to its business customers before the world overtakes it. Sun is working furiously to support all enterprises in this effort with products and services designed to address these needs. SunIT is working furiously to deploy and use Sun’s products and tools first, both to reduce Sun’s internal IT costs and to support Sun in developing and delivering field-proven products to its external customers.

Perhaps the current trends toward virtualization will soften the blow for many, but those of us who don't make the inevitable shift away from managing servers to managing services may not find our cheese.


The typical method employed for migrating existing workloads to ESX from physical machines involves instrumentation of the workload in the current environment, coupled with technical and business analysis to determine:

  • Workload suitability for virtualization
  • ESX resource requirements to support the workload

The technical resource sizing method most often used leverages the VMware Capacity Planner. The Capacity Planner employs an offsite analysis of the workload in a VMware hosted service. This service combines collected Capacity Planner metrics for over 20,000 machines with experience of the factors affecting virtualization and ESX resource utilization, to provide a "black box" solution to consolidation target composition.

The solutions provided are generally accurate enough to substantially mitigate the risks associated with oversubscription of ESX resources. Typically, organizations are able to migrate from the physical servers to the ESX virtualized environment with minimal additional empirical workload testing.

Organizations that cannot use the VMware Capacity Planner for workload analysis, due to policy or technical constraints that disallow the use of the VMware offsite analysis facility, must employ more elaborate and extended empirical testing to ensure that the workload is suitable for VMware virtualization in the targeted configuration.
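
As a rough sketch of the kind of on-site collection this implies, assuming the sysstat package is available on the source servers, something like the following gathers a day of utilization samples for later analysis:

# sample system activity every 10 minutes; 144 samples = 24 hours
sar -o /var/tmp/workload.sar 600 144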

“VMware Migration and Consolidation Without the VMware Capacity Planner” discusses the issue and provides an approach for managing migration risks when the Capacity Planner is not an option.


This is a collection of notes and links related to a dual installation of Solaris Nevada and Mac OS using Boot Camp. Although all of the links are helpful, this is a rapidly evolving set of technologies, so some adjustments are needed.

Possibly the most important adjustment I ran into is that the Boot Camp Beta was closed on December 31st, so you are compelled to use Leopard.

As you work through the installation, there are several things you can expect not to work, at least as of build 80 of Nevada.

  • Sound does not work.
  • Dual head configurations may result in irritating color differences between the two screens.
  • Solaris does not support HID 1.1. This bit me with my Microsoft Optical Desktop 4000 mouse. The keyboard works. The mouse doesn’t.
  • The one-button trackpad won't cut it. You'll need a real wired three-button mouse. No Bluetooth, so no Bluetooth mice; the wireless Mighty Mouse won't work.

There are probably other issues, but these are the ones I’ve run into.

There are quite a number of links out there with helpful information, and on the whole the installation works at least as well as I expected and wasn't too difficult. The biggest hassles are the Marvell Yukon driver and the Atheros wireless driver.

I somewhat mitigated the color differences between my laptop and my Apple Cinema display by using a Xinerama configuration with a different "Gamma" option for each monitor, rather than an Nvidia TwinView configuration. This allowed me to set the gamma separately for each monitor; all my attempts to deal with this through the Nvidia configuration mechanisms were futile. Here is my /etc/xorg.conf:

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    Screen      1  "Screen1" RightOf "Screen0"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "Files"
    RgbPath         "/usr/X11/lib/X11/rgb"
    FontPath        "/usr/X11/lib/X11/fonts/misc/:unscaled"
    FontPath        "/usr/X11/lib/X11/fonts/100dpi/:unscaled"
    FontPath        "/usr/X11/lib/X11/fonts/75dpi/:unscaled"
    FontPath        "/usr/X11/lib/X11/fonts/misc/"
    FontPath        "/usr/X11/lib/X11/fonts/Type1/"
    FontPath        "/usr/X11/lib/X11/fonts/100dpi/"
    FontPath        "/usr/X11/lib/X11/fonts/75dpi/"
    FontPath        "/usr/X11/lib/X11/fonts/TrueType/"
    FontPath        "/usr/X11/lib/X11/fonts/Type1/sun/"
    FontPath        "/usr/X11/lib/X11/fonts/F3bitmaps/"
EndSection

Section "Module"
    Load           "dbe"
    Load           "extmod"
    Load           "type1"
    Load           "IA"
    Load           "bitstream"
    Load           "xtsol"
    Load           "glx"
EndSection

Section "ServerFlags"
    Option         "Xinerama" "1"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/mouse"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "keyboard"
EndSection

Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Apple Color LCD"
    HorizSync       30.0 - 75.0
    VertRefresh     60.0
    Option         "DPMS"
    Gamma          1.2 1.0 0.85
EndSection

Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor1"
    VendorName     "Unknown"
    ModelName      "Apple Cinema HD"
    HorizSync       74.0 - 74.6
    VertRefresh     60.0
    Option         "DPMS"
    Gamma          0.95 1.0 1.0
EndSection

Section "Device"
    Identifier     "Videocard0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce 8600M GT"
    BusID          "PCI:1:0:0"
    Screen          0
EndSection

Section "Device"
    Identifier     "Videocard1"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce 8600M GT"
    BusID          "PCI:1:0:0"
    Screen          1
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Videocard0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "metamodes" "DFP-0: nvidia-auto-select +0+0"
    SubSection     "Display"
        Depth       24
        Modes      "1600x1200" "1280x1024" "1024x768" "800x600" "640x480"
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "Videocard1"
    Monitor        "Monitor1"
    DefaultDepth    24
    Option         "metamodes" "DFP-1: nvidia-auto-select +0+0"
    SubSection     "Display"
        Depth       24
        Modes      "1600x1200" "1280x1024" "1024x768" "800x600" "640x480"
    EndSubSection
EndSection

