Saturday, October 5, 2024

Low Code, No Code and the social observation

Most organizations are run by business graduates, with the exception, perhaps, of a few tech companies.

There is no right or wrong concerning an organization's hiring practices. On the other hand, common practices do have social impacts.

In general, as analyses of the American Community Survey show, business degrees usually pay well compared to other degrees. This, in general, matches common sense. The only other fields of study with comparable earnings are the STEM fields. On the other hand, the same analysis shows that, unless STEM graduates become managers, their earnings end up lower than those of business graduates.

With higher earnings and higher ranks in organizations, many business graduates, with some exceptions, are likely to hold high opinions of themselves and to consider themselves bright and able to make great decisions on, basically, everything. Many believe they can do no wrong and understand the world better, and some even believe business graduates are capable of learning and doing anything.

The low code, no code approaches to programming, in a way, flatter the minds of business graduates.

The low code, no code approaches, in general, require a low entry learning curve. In general, people with a basic logical mind can create scripts to help with business processes. Examples of low code, no code products range across the office productivity suite: Microsoft Excel, Word, Access, etc. All these products boost business graduates' egos. With the basic logical mind that most business graduates possess, they have managed to automate some business processes and produce visible results. This helps them show that they have achieved results with minimum effort. Of course, one thing many of them ignore is who created those tools in the first place, and they forget to ask themselves whether those tool builders could accomplish what they have achieved in effortless moves.

The self-esteem of business managers can rise so high that they dictate to programmers or, to a bigger extent, dictate the IT direction of an organization, while lacking the knowledge of how programmers were able to create the tools they are accustomed to using. In their minds, these tools are all that IT or programmers need to create great products for the world, while they forget that they themselves were never able to create those tools with their own tools and scripting skills.

The new tool coming this way is Power Apps. At first look, it may have the appearance of a low learning curve, just like what Excel and Access possessed. Watching a salesman's demonstration, it can really catch business graduates' eyes. On the other hand, people keep forgetting the old saying: no pain, no gain. There is always a cost to a gain. When you adopt a tool, you are limited by the tool. From the beginning, Power Apps may look powerful and capable. On the other hand, it takes away many features that are needed to make great software, features which business graduates have never experienced or known about. Asking programmers and IT to adopt Power Apps basically strips IT and programmers of access to the great features of traditional programming. With time, Power Apps will keep adding those well-known programming features back in. This, however, will also add complexity to Power Apps, and the low learning curve will be there no more.

Here is a summary of the article. With some logical mind, many can write scripts. On the other hand, not many people with a logical mind can become mathematicians - and mathematics is pure logic. So, give mathematicians a round of applause, let's be humble, listen to experts, and don't force your ego on them - they have studied far more to become experts.

Thursday, November 11, 2021

Firefox - set new tab to Google on Linux Mint

Firefox was one of the cornerstones of the Open Source movement. It has had a very positive impact on the web browser world and, hence, on our lives in general.

The fact that Firefox earned market share from Microsoft forced Microsoft to change its behavior toward web standards and to take a much more cooperative stance. Because of that shift, web standards were able to move forward in a more consistent way, which benefited the general public a lot. Just imagine a world where you need to track which website needs which browser. Firefox has continued to be one of my favorites, even though, for some other reasons, I have used it less for a while - I still like it a lot, with all of its extensions.

Recently, I converted one of my old computers to Linux Mint and, to no surprise, the machine came with Firefox as the default browser. I was happy to get back to using Firefox as my browser.

Unfortunately, when I began to customize the browser to my taste, I noticed that I had trouble setting the default website for a new browser tab. You get two choices there - a Firefox screen, or a blank screen. Even though I have recently had second thoughts about using Google as my primary search engine, I wanted to start with it as the default page for a new browser tab.

At that point, I was quite disappointed: in my mind, Firefox is an Open Source crown jewel, and I expected it to keep an open mind, the freedom spirit, which, to me, means giving users more freedom to control the settings.

Not surprisingly, I went to Google for help. With that, I found articles that basically pointed to a Firefox extension that can set the new tab to a different website. In the meantime, I also looked at other browsers offered by the Linux community and ended up installing Chromium. Chromium isn't really Google Chrome, but the Open Source code that Google contributed to the Open Source community from its Chrome project.

Not wanting to give up on Firefox, I played with it a bit more and found that Firefox has gone so far that Google is not even listed among the search engines by default. You need to go through the 'Find more search engines' process on its settings page to finally add Google as one of the search engines.

    Basically, you go to Settings, select Search from the sidebar,
      scroll down to 'Find more search engines', which directs you
      to a Linux Mint page. Scroll all the way down and follow the
      instructions to, finally, add Google as a search engine.

    Once you have the Google search engine added, you can set it
      as the default.
  
    Back in the new tab settings for Firefox, you can use the
      Firefox default page, uncheck all the extras, and save.

    Now, when you open a new tab, you still see the Firefox logo.
      But, at least, when you search, it is a Google search.
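
For those comfortable with configuration files, part of this can also be expressed in a user.js file in the Firefox profile folder. This is only a hedged sketch, not official guidance: as far as I can tell, these preferences control the home page and whether a new tab shows the Firefox page or a blank page, and there is no supported preference for pointing the new tab at an arbitrary URL - which is exactly why the extension route exists.

  // user.js in the Firefox profile folder (on Linux Mint the profile
  // typically lives under ~/.mozilla/firefox/<profile>/)
  // Home page, shown at startup and via the Home button:
  user_pref("browser.startup.homepage", "https://www.google.com/");
  // true = a new tab shows the Firefox page; false = a blank page.
  user_pref("browser.newtabpage.enabled", true);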

Through the above process, I learned that 'Open Source' is the motivation behind these defaults. However, I feel it probably went too far. I understand that we like to remind users of Open Source software about the importance of Open Source. But I really wonder if the current practice does a dis-service to that goal. As you can see, people will find their own way to get what they want - like installing an extension or installing a different browser. With all this extra work, I wonder how many will stay with Firefox and allow Firefox to, once again, be the favorite browser.

Thursday, September 3, 2020

Contemporary State Of App. Development and REST API

As contemporary application developers and architects, we all feel overwhelmed by all the new 'stuff' thrown at us - the new frameworks, the new 'standards', the new 'thoughts', and the new 'ways' of doing things.

However, application developers and architects are not in the fashion business; we do not, and should not, blindly adopt all the new stuff. A logical approach is what we should take.

For example, there was the new XHTML 'standard', and many books and much hype followed it. In the end, however, XHTML is pretty much dead. The lesson learned is that new 'stuff' must pass practical tests. Just being new and labeled a 'standard' does not cut it.

In the world of application development, there are many 'standards' that basically do the same thing. For example, JSON and YAML serve essentially the same purpose. Adopting one or the other is not really a matter of life or death. In the case of XHTML versus HTML5, the situation is a bit different: the limitations and features matter - if you follow one, you may not get the features or benefits of the other. In both cases, these are physical limitations. Once you adopt one, you take what comes with it.
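
To make the point concrete, here is the same small record in both notations (the field names are made up for illustration):

  JSON:
    { "college": "Example U", "state": "MD", "enrollment": 12000 }

  YAML:
    college: Example U
    state: MD
    enrollment: 12000

Either notation carries the same structured data; the choice is mostly a matter of tooling and taste.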

New thoughts or practices, on the other hand, are not all or nothing. People do not have to adopt the whole thing to take advantage of them. The RESTful API, as defined by Roy Fielding in his 2000 PhD dissertation, is a good example.

The line of thought behind the RESTful API leans heavily on the HTTP protocol. The 'standard/practices' try to use, adopt, and fit HTTP's features. The idea of statelessness is not necessarily new, but it is an emphasized feature. The idea of an API, or of the 'client-server' architecture, isn't new either; it has existed for a long time. Even the use of plain text as the communication medium is an old practice. However, increasing processing power and network bandwidth have made the practice much more feasible.

The RESTful API was new and trendy for a while. However, as of this writing, along with my previous article REST or Representational state transfer, we have seen deviations from the practice of the REST API - for example, the Graph API and GraphQL.

Personally, from a philosophical point of view, I really question REST's strong adoption of the HTTP protocol. For one, HTTP was not designed to facilitate the 'client-server' application architecture to begin with. Second, the 'cast' to follow the GET/PUT/POST/DELETE keywords' original purposes is just artificial and is not geared toward improving the client-server architecture. The use of GET with a query string is an obvious example - see the next paragraph.

As of today, object oriented programming is largely the accepted best practice. The basic practice of object oriented programming is to group related data into an object instead of treating the items individually. Of course, complex objects can be nested, and the data are, therefore, nested too. The HTTP GET query string, by design, carries no nested or structured data. By requiring data transfer via the query string, the sending and receiving ends are forced to unpack and repack the data to fit the object oriented practice, while the POST keyword can easily send structured objects via JSON or other means.
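
A short Javascript sketch of the difference (the URL, field names, and the dotted flattening convention are all hypothetical):

  // With GET, a nested object must be flattened into the query string
  // (one common convention shown) and re-assembled on the server:
  fetch('/api/colleges?name=Example%20U&address.city=Baltimore&address.state=MD');

  // With POST, the nested object travels as-is in the body:
  fetch('/api/colleges', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      name: 'Example U',
      address: { city: 'Baltimore', state: 'MD' }
    })
  });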

As mentioned earlier, the REST API is a practice, and there really isn't a need to adopt it in full in order to take advantage of it.

The doctrine of using the GET/POST/PUT/DELETE keywords also does not make sense in that it limits the possible operations to create, read, update and delete - aka CRUD. However, as we all know, many more operations are required to build an effective client-server application. With the REST API, it is common practice to pass parameters/flags to fine tune the desired operation, and the doctrine of GET/POST/PUT/DELETE forces the application to check for the desired operation in two places: once in the HTTP keyword (even though this may be handled by, say, the .NET Web API framework) and once in the parameters/flags passed in. This may not be a big deal. But the question is why, when we can pack all the info in just one parameter/place, as sketched below. Also note that end-point syntax standardization is not tied to the doctrine of GET/POST/PUT/DELETE; you can simply use the POST keyword while keeping the same end-point syntax.
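
To illustrate packing all the info in one place, a hypothetical end-point could accept every operation through POST, with the desired operation spelled out in the message itself (the end-point and field names are made up):

  // One end-point, one place to look for the desired operation.
  fetch('/api/colleges', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      operation: 'merge',   // an operation that CRUD alone does not express
      sourceId: 123,
      targetId: 456
    })
  });

The server then dispatches on the 'operation' field alone; no second check against the HTTP keyword is needed.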

The design of the end-points is also worth noting. I have seen a very typical design that mimics the design of the database tables. Personally, I have strong doubts about this approach, even though I understand the influence of the RESTful doctrine. This approach, in essence, asks the client application to perform the work of a database. True, this approach guarantees the accessibility of all the data. However, from the application's point of view, this is never the aim of an application architect. The aims of an architect can be decoupling, a layered design, shielding the higher layers from the complexity of the lower layers, etc. - but not access to every single detail of the lower layer. This approach of asking clients to know and handle the database tables directly actually violates many application design best practices. For one, you are not decoupling the design and have not isolated the higher layer from the lower-level detail, unless you build all those layers into the client application.

Let's forget about performance for a moment. Just think about what happens if you decide to change the database table design. Without changing the end-point design, are you still guaranteed access to all the details? Also, obviously, the black box approach of an object oriented design is violated. The design also forces the API consumer to learn all the relations between entities/objects. And performance is going to take a hit. Just imagine needing to join two tables: all the records have to be read from the server and checked for matches before the matched info can be presented to the user.
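
As a hypothetical contrast for the join example above (the URLs are made up):

  Table-mirroring design - the client must fetch both 'tables' and
  perform the join itself:
    GET /api/colleges
    GET /api/enrollments

  Use-case design - the server performs the join and returns only
  what the screen needs:
    GET /api/colleges/123/enrollment-summary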

People working in the IT field are constantly overwhelmed by new stuff, and it is my hope that, with this article, they will actually look at the new stuff and make logical decisions.

Saturday, January 5, 2019

Organizing asynchronous code in javascript / c# ...

Begin

Asynchronous code in a program isn't uncommon. It does, however, from time to time, cause confusion when tracking and debugging the code surrounding it. For example, a programmer may forget that an asynchronous task has been kicked off and that certain operations or functions would not and could not be performed until the task is done. To help with this, there is the idea of keeping the kick-off and callback functions close to each other. An example of this is the Microsoft C# async and await keyword pair. Javascript mostly uses anonymous functions to keep the kick-off and the callback close.
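
A minimal Javascript sketch of both styles (the function names are hypothetical):

  // Anonymous callback: the kick-off and the follow-up logic sit
  // next to each other in the source.
  startLongTask(function (result) {
    updateScreen(result);   // runs only after the task is done
  });

  // The same idea with a Promise and async/await:
  async function run() {
    const result = await startLongTaskPromise();  // kick off and wait
    updateScreen(result);                         // clearly after the task
  }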

A scenario I ran into is that a single button click triggers several asynchronous tasks, and the logic is required to monitor the result of each asynchronous task in order to react. As we can see, in this case, even though keeping callbacks with their triggers is good, keeping all the callbacks close to each other is also beneficial.

My solution to the situation is to pass a gathering function to each trigger/callback pair and have each callback eventually call the gathering function. In this way, with the logic in the gathering function, we can monitor all the results in one central place, which helps with debugging too. If desired, it is also possible to move all the callback logic into the gathering function for centralized logic checking and debugging.
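
Here is a minimal Javascript sketch of the idea (all names are hypothetical):

  // Build a gathering function that expects a given number of results
  // and reacts once all of them are in.
  function makeGather(expected, onAllDone) {
    var results = {};
    var count = 0;
    return function gather(taskName, result) {
      results[taskName] = result;   // one central place to inspect/log
      count += 1;
      if (count === expected) {
        onAllDone(results);
      }
    };
  }

  // A single button click kicks off several asynchronous tasks; each
  // trigger/call-back pair receives the same gathering function.
  function onButtonClick() {
    var gather = makeGather(2, function (results) {
      console.log('all tasks done:', results);
    });

    fetchUserData(function (data) {    // hypothetical async task
      gather('user', data);
    });
    fetchOrderData(function (data) {   // hypothetical async task
      gather('orders', data);
    });
  }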

End

Thursday, January 3, 2019

Constructing re-usable Javascript/JQuery/HTML OOP controls

Begin

The goal of this article is to outline a few thoughts on creating re-usable Javascript/JQuery and HTML based web controls.

Through my recent NodeJs development, I noticed my need to create similar web/HTML controls in various parts of the project. As seasoned programmers would, I began to think about re-usability of the code and about an OOP approach.

As Javascript/HTML programmers will know, OOP isn't an automatic thing in Javascript/HTML development. After a few days of pondering and trying, I came up with a scheme, and that scheme is what this article is all about.

First of all, let me clarify what I mean by a 'control'. When I say a control, what I really mean is a group of basic/HTML controls considered as a single operational unit that can be deployed or re-used in various parts of projects - in a way, it is actually a 'composite control'. For example, you can consider a set of html textbox elements (i.e., <input> elements) arranged in an html table element to be a spreadsheet control. You may even include a few buttons as part of that control. You can then use it in various parts of your projects.

In my case, one of the controls looks like this (screenshot omitted): it consists of quite a few html controls/elements, and they interact with each other as a unit.

With Javascript in mind, obviously, we would like to capture the event handling and the needed data in a Javascript 'class/object'. This can be achieved by using the function and prototype constructs of the Javascript language:
  // Constructor function: captures the data this control instance needs.
  var cClass = function (Data) {
    this.Data = Data;
  };
  // Methods are attached to the prototype and shared by all instances.
  cClass.prototype.Function_1 = function (argument /*, ... */) {
    // event handling or other per-instance logic goes here
  };



The functions would include all the event handling functions and other functions needed to operate the control. You may also include a function that constructs all the html elements.

What makes a control a control is that you can use it in various places in your projects. A very important requirement for that is being able to assign an ID to each instance of the control. Programming code can then use the ID to perform tasks on the correct instance of a control.

Html controls can all have an ID attribute. It is, therefore, possible to assign a different ID to each instance of the control, and this can be done through the function that constructs the html elements.

A better approach, however, in my opinion, is to wrap each instance in a div element carrying the instance ID. With this approach, you have the option of building the html manually by copying html elements into an instance div element instead of relying on a function to build the html elements.

Once the controls are in place, program code needs a way to access the html elements inside an instance. This is where JQuery comes into play. By using the selector syntax:
  $('#InstanceID #ElementID')
the program code can access elements inside a given instance.
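
Putting the pieces together, here is a hedged sketch of two instances of the same control, each wrapped in a div carrying the instance ID (all IDs and names are made up):

  <!-- Two instances, built by copying the same inner html: -->
  <!-- <div id="sheetA"> <input id="txtName"> <button id="btnSave"> </div> -->
  <!-- <div id="sheetB"> <input id="txtName"> <button id="btnSave"> </div> -->

  var cClass = function (instanceId) {
    this.instanceId = instanceId;
    var self = this;
    // Wire the save button that belongs to this instance only.
    $('#' + instanceId + ' #btnSave').on('click', function () {
      self.save();
    });
  };
  cClass.prototype.save = function () {
    // Read the textbox inside this instance.
    var name = $('#' + this.instanceId + ' #txtName').val();
    console.log(this.instanceId + ' saving: ' + name);
  };

  var sheetA = new cClass('sheetA');
  var sheetB = new cClass('sheetB');

One caveat worth noting: repeating the same element ID under different instance divs is technically invalid HTML, even though the descendant selector pattern works in practice; using a class for the inner elements with the same selector pattern is a safer variant of the same scheme.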

There are a few other details to observe to make the code work. But, in essence, this basic structure should give you a re-usable control.

End

Tuesday, September 18, 2018

Higher Education IPEDS College Data UI and RESTful API - definition, charting, demonstration


Updated 04/21/2019: The demonstration of the project can now be found at: here.

* 11/12/2018: Imported the newest IPEDS data, as of Oct. 24, 2018, into the system. An article that demonstrates the use of these new data can be found at EdPond.blogspot.com.

This article describes my desire, my efforts, and my continuing work on an IPEDS College Data User Interface (UI) and Application Programming Interface (API). The application consists of the client, through which the user interacts, and the server, which is implemented via a RESTful API.


The IPEDS college data is, arguably, the most important data about United States colleges. The data is not only for researchers but also for the general public to learn about colleges. The data appears in various reports like the US News College Ranking and the College Scorecard.

The IPEDS college data have been available for many years, but, like much government data residing on the Internet, a user friendly application largely does not exist, and this seriously restricts the value of the data, since people can't easily arrange it into a digestible form. It is these inconveniences that made me want to build an infrastructure for, say, government data.


A system was built around the statistical programming language R, which carries a lot of my pioneering ideas. That system, however, cannot be easily extended to a modern web based infrastructure - hence, the current new project.

With the desire to build a new system, I initiated the project with the postsecondary higher education IPEDS college data as the focus. To begin, a fund raising attempt was launched on Kickstarter.com. With the lack of support from IPEDS participating institutions, I took the work on myself.

As with many long standing data collections, the IPEDS college data suffers from problems like definition changes, backward compatibility, etc. And, as IPEDS college data insiders know, resolving every issue that comes with the survey can take a tremendous amount of effort. For this project, we are not aiming to resolve all the problems, but to provide easier access to the data.
 
At the onset of the project, two user groups are of major concern:
  1.  Students and parents that are looking for colleges.
  2.  Researchers.
These two groups have different scenarios for using the college data and will likely require different user interfaces or applications. One thing they have in common, however, is the need to have all the data in a database for easy retrieval.

The first goal of this project is to get the college data into a database, so that we can then address the accessibility issue. The process of importing the college data into the database isn't painless since, as we all know, not all data are clean. This process is now largely refined, as demonstrated in my first youtube video. The process isn't totally automated, but it is good enough for most purposes - do be prepared to be interrupted when there are problems in the raw data.


For students and parents, our goal is to allow viewing and comparing multiple colleges to support informed decisions.

For researchers, our goal is to provide tools that help identify definition changes and retrieve college data across multiple years.


Once the IPEDS data is processed into the database, it is then a matter of how to retrieve the needed information. The design decisions here can make or break the usefulness of the system.

To entice average users to use the college data system, the interface should have a low learning curve. It is always true that the more you know, the easier you can adapt to or make better use of a system. However, with the busy schedule of today's population, a low learning curve is essential for the majority of the population.


Before the project started, most of the data access logic was tested and implemented in the programming language R with command line console interfaces. The goal of this new project is to provide our users with a tried and true graphic user interface.

As the project progressed, videos were made to demonstrate the usability. The project won't reach its final stage until the user interface is finalized.

The first progress report can be found at UI+API for IPEDS College Data - Definition, Trend, History - a progress report. The IPEDS college data system demonstrated a search/filter front end that does not require prior knowledge of the IPEDS college data survey. After the desired measurements have been selected, the user interface provides a summary table where the user can click and pick the allowed refinements for the measurements. The user can then retrieve the college data for viewing as demonstrated in the video. The goal of the video is to show the capability rather than the operation, as the operation will be refined to provide an even better user interface for less tech inclined users.

The second progress report can be found at UI+API for IPEDS College Data - Chart, Trend, Definition - a progress report. The college data access system not only shows the added charting capability but also demonstrates various scenarios for using the charting capability to examine the retrieved college data. Again, some of the operations will be refined to provide an even better user experience. The chart configuration interface, however, should already give users a very positive experience with the college data system. Compared to the first video, which left the user with a list of records that had to be processed with other software to make better sense of the data, the charting capability removes that need. The ease of use of the chart configuration table also makes the project very user friendly.

Please bear in mind, this is just a seed project. I intend to include more government data based on a similar data processing scheme.



====== vvvvv Scratches vvvvvvv =========

This project is about building the basic infrastructure for general government data with the federal IPEDS college data as the pilot data source.

The IPEDS (Integrated Postsecondary Education Data System) survey has been around since the 1980's, and it has collected a lot of data from United States colleges, public and private. The data is of interest not only to researchers but also to students and parents.

IPEDS data is made available and is used in many reports - like the US News College Ranking and the College ScoreCard Report.

Even though the IPEDS data is available to the general public, like most government data residing on the Internet, it usually requires a certain level of data processing skill to make good use of it.

The purpose of this project is to reduce the barriers for both the general public and researchers - please view the project video to see what we know is working and how we were able to produce some reports from our current system. The goal of this project is to extend the limited data we have and to provide a better user interface for our users.

As mentioned in the video, we are targeting two groups of users: researchers and the general students/parents group.

For students and parents, we will provide tools that allow them to compare colleges.

For researchers, we will provide tools to help them deal with multi-year trend data.

This is a foundation project, and it is for the social good. By adding more government data to the system, citizens can get educated and understand our society better.

Supporting this project basically gives you access to the data and helps us continue funding the development.

Thanks for your support.

* All rewards are early-bird 1 year subscriptions or less. All surveys published by IPEDS for the most recent 10 years are available with summaries. Reports generated must cite our URL for accountability - we make sure our data matches the ultimate source. Personal research can't be cited/used by an employer. Personal account results can't be published/shared. An institution account means data at the institution level. A sector account means data at the sector level.
Risks and challenges

As mentioned in the project video, for the most part, we have tested our approach with limited data. The testing demonstrated our ability to solve the problems we encountered. No denying, there will be obstacles, just like the ones we have already run into. Some may just relate to data clean-up and some may be more technical. But we are confident we can get them resolved. Of course, we will need a dedicated web server to host the data online and to provide the data API - one year's cost is included. If we receive support beyond the goal, the extra money will be used to support the operation.


================
IPEDS (Integrated Postsecondary Education Data System) data is arguably the most important data source for learning about United States colleges. The data have been available for years. However, without a user friendly interface/application, the data, like most raw data, is likely under-used.

The goal of this project is to build a user-friendly interface/application based on my past experience in dealing with federal data.

The first video is a buy-in pitch that details the vision, the prospects, and the considerations.

The second and following videos serve as progress reports and, possibly, tutorials, once the system is made available.

Comments welcome ... SsocialDataCenter at(@) gmail.com

===============
This video, also available at the KickStarter project, aims to make IPEDS US college data easier to use and access.

As mentioned in the project, the IPEDS data has been available for a long time, but a user friendly application hasn't been readily available. The project is built on a general database scheme that can be extended to other datasets.

For IPEDS, the intent is to provide users with an easily accessible online database and a user friendly interface that can retrieve data and generate charts.

User support for the project is needed not just to build the system but to support the cost of keeping the online database on internet servers. Updating the database also costs money, and so does making improvements to the application.

===============
This video demonstrates an IPEDS access UI. The project is under construction. The project takes a very general approach, which means it can easily be adapted to other datasets. The project presents the data as it is and does not try to make decisions for researchers as to how a variable should be interpreted or whether its definition has changed throughout the years. We leave these decisions to data professionals. Be aware, however: not all information is available via the IPEDS data files. When in doubt, the IPEDS documentation should be consulted.

Questions and Comments welcome ... SsocialDataCenter at(@) gmail.com

===============
This video presents a progressive enhancement to an IPEDS access UI project, which takes a general approach to data organization and, therefore, can adapt to other datasets easily.

Many of the stated objectives:
    The project presents the data as it is and does not try
    to make decisions for researchers as to how a variable
    should be interpreted or whether its definition has changed
    throughout the years. We leave these decisions to data
    professionals.
will be demonstrated through the course of this new video.

Besides using the newly designed charting/plotting capability to demonstrate the project's objectives, the video itself features an innovative chart configuration tool through which many usage scenarios are possible.

With the chart configuration tool, usage scenarios were demonstrated that show the user ways of detecting anomalies, checking definitions, and locating the sources of causes.

Be aware, however: not all information is available via the IPEDS data files. When in doubt, the IPEDS documentation should be consulted.

If you have any questions or comments, please feel free to contact me at: socialdatacenter at(@) gmail.com

========




Wednesday, August 22, 2018

REST or Representational state transfer

After working on my project using REST frameworks, I realized my implementation could be awkward if I relied heavily on the HTTP GET method, even though the stateless approach made good sense in the project.

Because of this awkwardness, I began to look up problems with REST. One of the articles I ran into is 'RESTful APIs, the big lie'. A few things described in the article resonate with my development experience. After reading the comments on the article, it is clear that there are quite a few perception/understanding issues around REST.

To get a bit of clarification, I decided to see what Wikipedia has to say about REST - sorry, I have practical projects to work on and no intention of spending my time on theoretical debates.

Here's what I got out of Wikipedia:
  REST is largely what the Web is today (following the constraints section) -
    Client-Server       - this is obvious.
    Statelessness
      - Except for applications that use server side session storage.
      - Cookies are OK since they are client side.
    Cacheability
      - Applications may not always set it specifically. But it is there.
    Layered system
      - HTTP fulfills it.
    Code on demand (optional)
      - Basically, javascript, or others in the early days.
    Uniform interface
      Resource identification in requests
        - This calls for identifying resources, which the URL basically fulfills.
      Resource manipulation through representations
        - A working application that does not rely on server sessions will meet this constraint.
      Self-descriptive messages
        - Wikipedia quotes the example of media types - well, the implications are many.
        - A media type is just a code - that means all other info/codes could be hard-coded.
        - I guess we can stretch this to say that some message standards are to be specified.
      Hypermedia as the engine of application state
        - This is like saying a home/root page is desired.
        - It serves as the root from which to discover all other resources.
  Applied to Web services
    As should be clear by now, REST does not call for HTTP.
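
As an illustration of the last constraint, a hypothetical root/home resource could lead a client to all other resources (the URLs and field names are made up):

  {
    "description": "College data API root",
    "links": {
      "colleges": "/api/colleges",
      "surveys":  "/api/surveys",
      "docs":     "/api/docs"
    }
  }

A client that knows only the root URL can discover everything else by following the links - the essence of 'hypermedia as the engine of application state'.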

Most of today's so called RESTful APIs/web-services are based on HTTP, but they don't have to be. Also, being based on HTTP does not make an API RESTful. Since HTTP is not called for, the use of GET/POST/PUT/DELETE should not be a criterion either - even if it is what most people try to do - and now it makes me wonder what role HTTP played in Roy Fielding's dissertation.

For me, I think I am happy with what I have after reading all this. From my assessment of my own project, I would say that what mine doesn't have at this point is the following:
  A home page that can reveal/lead to all resources.
  A message standard to be specified/published.
Other than that, I think I am fine. Besides, I don't really want to reveal my message standard or provide a home page if I am not interested in making the API public. I may only reveal these to my business partners.