QUOTE(dkk @ Apr 18 2014, 02:10 PM)
Depends on the situation. For quick one-off jobs, there's no point spending a couple of days writing code when the same thing could be achieved in half an hour. And especially if doing the processing manually (without writing any code at all) would take more than half an hour but less than a couple of days.
Imagine your boss tells you to "sort out these files by these criteria", expecting it to take 4 hours. But instead of doing that, you write a small script and finish the job in half an hour. Would he be happy? What if you went away for 3 days to write a stable and robust app? Would he wonder why you didn't just do it the 4-hour way? Would he be happy that you gave him this nice app that he can never sell, and that nobody else can use, because the task was a one-off thing?
This is what I meant by programs made by sysadmins: not full programs/apps, not one-line bash commands, but something in between.
QUOTE(malleus @ Apr 18 2014, 08:54 PM)
I'll give you an example of the fallacy of stable-and-robust vs speed of creation.
Not that stability and robustness aren't important, but look at the example of Twitter.
It was originally developed using the Ruby on Rails framework, which allowed them to build it and get it out quickly. Only later did they look at splitting the core application into different pieces for scalability and speed.
However, if they had planned for all of that scalability and speed work right from the start, they might never have gotten off the ground. It would simply have taken too long to produce something usable, and they would very likely have run out of money, which is a very bad thing for a startup too. You could even argue that over-planning, with too much functionality that is guaranteed to change multiple times during the course of the project, is what causes large-scale enterprise projects to suffer all sorts of horrible delays and budget overruns in the first place.
Firstly, thanks for the thoughtful answers. But we need to be clear about why I said the speed of creation must never be more important: it is likely to compromise the robustness and stability of an app.
Take my simple example below (it's in VB.Net, because I'm a VB guy slowly transitioning to C#...).
Imagine we are in the development phase, designing a function that returns primitive integers in a collection.
The first approach is a straightforward design where speed is the deciding factor.
CODE
Public Class Utility
    Public Function GetIntegerArray() As List(Of Integer) ' <- the function hands out the list of integers directly.
        ' implementation here...
    End Function
End Class
Obviously this is the fast way, and very likely the one preferred by many programmers.
In the second approach, a more thorough study goes into the design so that robustness and stability can be achieved, albeit with added complexity and more code to write. Time shouldn't be an issue here for anyone who knows his stuff, though; writing the code is not the slow part.
CODE
Imports System.Collections
Imports System.Collections.Generic

Public Class Utility
    ' Instead of returning a naked list, we encapsulate the whole collection inside
    ' this class, for encapsulation reasons. And instead of returning Integer,
    ' we hand out objects of type ObjX instead.
    Implements IEnumerable(Of ObjX)

    Private Function GetEnumerator() As IEnumerator(Of ObjX) Implements IEnumerable(Of ObjX).GetEnumerator
        Return New InnerIterator
    End Function

    ' IEnumerable(Of T) also requires the non-generic overload.
    Private Function GetEnumeratorNonGeneric() As IEnumerator Implements IEnumerable.GetEnumerator
        Return GetEnumerator()
    End Function

    Private Class InnerIterator
        Implements IEnumerator(Of ObjX)
        ' implementation here (Current, MoveNext, Reset, Dispose)...
    End Class
End Class

' This is our custom-made value holder.
Public Class ObjX
    Public ReadOnly Property IntValue As Integer
End Class
Although both approaches provide the same functionality, the second approach looks like overkill, and most programmers would probably reject it.
But then, after the app is published, a client suddenly requests that the function also return a primitive Double, and your boss expects you to deliver a solution within 4 hours. Say what again?
There are only three options with the first design:
1. Re-write another, nearly identical function that returns doubles. <- This is gonna make you WET (Write Everything Twice / We Enjoy Typing) and violates the DRY principle (Don't Repeat Yourself).
2. Modify the current function to return a double instead. <- All the consuming code will be affected. Ripple effects. Major headache.
3. Ignore the request because of the 1st and 2nd problems. <- Well, reputation suffers.
...Either way, it will likely take more than 4 hours to fix the problem, while also sending the maintenance cost skyrocketing.
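For illustration, option 1 under the first design ends up looking something like this (just a sketch; `GetDoubleArray` is a hypothetical name and the bodies are placeholders):

CODE
Public Class Utility
    Public Function GetIntegerArray() As List(Of Integer)
        ' implementation here...
    End Function

    ' Near-duplicate of the method above, differing only in the element type.
    Public Function GetDoubleArray() As List(Of Double)
        ' same logic copy-pasted, with Double swapped in for Integer...
    End Function
End Class

Every bug fix or behaviour change now has to be made twice, once in each copy.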
For the second design, the solution is a very simple one.
1. Just add a property that returns a Double to the wrapper ObjX, like so:
CODE
Public Class ObjX
    Public ReadOnly Property IntValue As Integer
    Public ReadOnly Property DblValue As Double ' <- new Double value here.
End Class
...the rest remains unchanged. Very little work, so it shouldn't take 4 hours to solve the problem, and the maintenance cost stays low.
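To see why the callers are unaffected, here is a sketch of hypothetical consuming code (assuming the placeholder implementations above are filled in):

CODE
Dim util As New Utility
For Each item As ObjX In util
    Console.WriteLine(item.IntValue) ' existing callers keep compiling unchanged.
    Console.WriteLine(item.DblValue) ' new consumers simply read the new property.
Next

Because callers only ever see ObjX, adding DblValue is a purely additive change; nothing that already iterates the collection needs to be touched.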
So from this example we can see why stability and robustness must rule over speed of creation, for one obvious reason: cost. Without a good amount of planning for scalability, any project will peak in its evolution prematurely. Also, IMO it is bad to treat any work as a one-off; everything in a system is related and interconnected, and every line must be documented and fully understood so that any future adjustment goes smoothly.