[wingide-users] Edit-and-continue, what are the alternatives?
tms at stambaugh-inc.com
Thu Sep 22 10:27:12 EDT 2005
> 2. What do other users recommend in such a situation? I know that
> writing unit tests for everything can lower the amount of debugging
> needed for the full app, but then again, with the distributed
> environment I have here, writing a "small" testcase for everything is
> hardly possible.
This is a *hard* problem to solve. It seems to me that the best-of-breed
IDE's -- IBM/Smalltalk with Envy/Developer comes to mind -- did cross the
minimum threshold of usability. They did that, however, at ENORMOUS cost in
an environment (Smalltalk) designed for it. It's one of the reasons why
VisualAge/Java was canned (VAJava had IBM Smalltalk under the hood).
I *love* the concept, and an appealing aspect of python and Wing is the
tempting possibility that this feature may someday be possible, because
python does seem to have the necessary internals to support it. On the other
hand, you seem to be describing a setting where even in the best of cases
this might not be a good idea.
I would encourage you to contemplate finding a way to scale or model your
"distributed environment", including the DB, remote VPN and so on, so that
it is more manageable -- whatever you do with your IDE. For example, during
debug you can usually replace your live DB connection with a dummy test
fixture that simulates the desired behavior (same API but fixed & known good
results). You can play similar games with your VPN connections. The strategy
is to factor your problem into smaller pieces -- using unit tests -- that
you can then reassemble.
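As a minimal sketch of that dummy-fixture idea (all names here -- FakeDB, load_users -- are hypothetical, not a real library API): the fixture mirrors whatever interface your live connection object exposes, but returns fixed, known-good results.

```python
class FakeDB:
    """Stands in for a live DB connection during debugging:
    same API as the real connection, but canned results."""

    def __init__(self, canned_rows):
        self.canned_rows = canned_rows

    def query(self, sql, params=()):
        # Ignore the SQL entirely and return the fixed,
        # known-good results we prepared in advance.
        return list(self.canned_rows)

    def close(self):
        pass  # nothing to tear down

def load_users(db):
    # Application code only sees the shared API, so it cannot tell
    # whether it is talking to the real DB or the fixture.
    return [row["name"] for row in db.query("SELECT name FROM users")]

db = FakeDB([{"name": "alice"}, {"name": "bob"}])
print(load_users(db))  # -> ['alice', 'bob']
```

Because the application depends only on the API, you can swap the real connection back in without touching the code under test.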
Imagine constructing a software PC-board for your application, with a socket
for each module that accepts the module as a plug. What you want to do is
create a unit test and test cases for each socket. Part of that process will
be to create, for each socket, a dummy component that fits into the socket.
With all those dummy components in place, exercise a functional test and
ensure that the overall design still solves the problem. You don't need
VPN's, DB's, and any of that hair for this part of the exercise -- you just
need python wrappers that *simulate* those things. When the overall
application is passing its functional tests, take each dummy -- one at a
time -- and put the actual module in its place. You want the test fixture
(whose socket is the same as the socket in your application) to exercise
just one module at a time. So when you're debugging the DB, you don't worry
about the VPN -- and vice-versa.
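The socket-and-plug setup above might look something like this with unittest (the names -- FakeTransport, sync_records -- are invented for illustration): the test exercises one real module through the same socket (interface) the application uses, while the other sockets still hold dummies.

```python
import unittest

class FakeTransport:
    """Dummy plug for the VPN socket: records what was sent
    and always acknowledges, so no real network is needed."""

    def __init__(self):
        self.sent = []

    def send(self, payload):
        self.sent.append(payload)
        return "ok"

def sync_records(records, transport):
    # The real module under test; it only knows the socket's API,
    # never whether a real VPN or a dummy plug is behind it.
    acks = [transport.send(r) for r in records]
    return all(a == "ok" for a in acks)

class SyncSocketTest(unittest.TestCase):
    def test_sync_sends_every_record(self):
        transport = FakeTransport()
        self.assertTrue(sync_records(["r1", "r2"], transport))
        self.assertEqual(transport.sent, ["r1", "r2"])

if __name__ == "__main__":
    unittest.main()
```

When you later debug the real transport, you plug it into the same socket and re-run the same test -- nothing else in the assembly changes.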
When each of the modules passes its unit tests, the assembly as a whole
should pass its functional test -- and if not, either your functional test
or unit tests are broken. Most often, "broken" means you didn't test
something that turns out to work differently than you expected.
When the functional test(s) work, then you try doing something real with it.
When it breaks (it always will), its failure will tell you what unit tests
or functional tests you forgot to write. Find a way to make the failure hard
and repeatable, write a functional or unit test that exercises that, and add
it to your test suite. Now repeat all this until it's working to your
satisfaction.
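Freezing a field failure as a regression test can be as simple as the following (the bug and the parse_tags function are an invented example of the pattern, not from any real code): the exact failing case goes into the suite so it can never silently return.

```python
import unittest

def parse_tags(text):
    # Fixed version: a naive text.split(",") returned [""] for empty
    # input, which was the hard, repeatable failure seen in the field.
    return [t.strip() for t in text.split(",") if t.strip()]

class RegressionTest(unittest.TestCase):
    def test_empty_input_gives_no_tags(self):
        # This is the exact case that failed for real.
        self.assertEqual(parse_tags(""), [])

    def test_normal_input_still_works(self):
        self.assertEqual(parse_tags("db, vpn"), ["db", "vpn"])

if __name__ == "__main__":
    unittest.main()
```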
A Smalltalk-style edit-and-continue would be really cool, and would certainly
speed up this process. I think, though, that just moving to this kind of
development process will finesse your 30-second startup times enough to get
you moving again even without this enhancement.