Re: NBA Live GameFaceHD Companion App Delayed For Later This Week, No Change in Demo
I work in software development also.
Technically we are both right.
I agree that there are two completely different environments for app development:
- Android apps use Java-based SDKs.
- iOS apps use Objective-C-based SDKs.
However, we don’t know the actual technology EA used for ‘facial recognition’… which was my original point.
Mainly because there are ‘web-based’ facial scanning tools/software available, as well as ‘native’ facial scanning tools/software on the market.
And some of these (face scanning) SDKs can be installed on a mobile device using a code wrapper (rough sketch after this list):
- A Java wrapper for Android devices.
- An Objective-C wrapper for iOS devices (or C# bindings on both platforms, if you go the Xamarin route).
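Just to illustrate the wrapper idea, here's a made-up sketch (not EA's code, and every name in it is hypothetical) of how a Xamarin-style C# setup could expose a native face-scan SDK: one shared interface, with each platform project wrapping its own SDK behind it.

```csharp
using System;
using System.Threading.Tasks;

// Shared interface the rest of the app codes against.
public interface IFaceScanner
{
    Task<byte[]> ScanFaceAsync();
}

// Android project: wraps a (hypothetical) Java-based face-scan SDK behind the interface.
public class AndroidFaceScanner : IFaceScanner
{
    public Task<byte[]> ScanFaceAsync()
    {
        // ...call into the bound Java SDK here...
        throw new NotImplementedException("placeholder for the native Android SDK call");
    }
}

// iOS project: wraps a (hypothetical) Objective-C face-scan SDK the same way.
public class IosFaceScanner : IFaceScanner
{
    public Task<byte[]> ScanFaceAsync()
    {
        // ...call into the bound Objective-C SDK here...
        throw new NotImplementedException("placeholder for the native iOS SDK call");
    }
}
```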
EA could have built it 'from scratch' on their own... or licensed a 3rd-party app... or used a web-based solution. Who knows?
But if they wanted to build the app from a single codebase, the key is to centralize as much common business logic as possible (workflows, database storage, network calls, authentication, account management, etc.) in a shared library, and reference it from separate, dedicated iOS and Android layers that handle the platform-specific work, like activating the device camera. The trick is to keep those “UI layers” as thin as possible.
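Continuing the made-up sketch from above, the shared library would hold the common logic and only talk to the platforms through that interface. Again, the class names, the upload endpoint, and the auth handling here are purely hypothetical, not EA's actual service:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Shared library: common business logic that both apps reference.
public class FaceScanService
{
    private readonly IFaceScanner _scanner;              // thin platform layer from the sketch above
    private readonly HttpClient _http = new HttpClient();

    public FaceScanService(IFaceScanner scanner) => _scanner = scanner;

    // Capture a face scan via the platform layer, then upload it (placeholder endpoint).
    public async Task<bool> ScanAndUploadAsync(string uploadUrl, string authToken)
    {
        byte[] image = await _scanner.ScanFaceAsync();   // platform code drives the camera
        var content = new ByteArrayContent(image);
        _http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", authToken);
        var response = await _http.PostAsync(uploadUrl, content);
        return response.IsSuccessStatusCode;
    }
}
```

Each platform project then just news up its own scanner and hands it to the shared service; everything else lives in one place.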
So yes, in reality EA could build BOTH native iOS and Android apps from a single, shared codebase, using cross-platform tools like:
- Xamarin (shared C#)
- PhoneGap/Cordova (shared HTML/JavaScript)
- Or a dozen other tools I didn’t list here.
And for the face-scanning piece itself, there are cloud-based SDKs like Face++ that can be called from either platform.
Don’t get me wrong, I don’t work for EA, so I do not know ‘how’ they built their app, or ‘why’ they built it the way they did… nor what specific technology they used ‘under the hood’… but I do know that it is POSSIBLE to accomplish this with a single codebase. That’s all I’m saying.
Especially if they are using ‘cloud-based’ facial scanning software, there would probably be no need to build a native (Java-based) Android app or a native (Objective-C-based) iOS app at all.
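For that cloud-based route, the on-device piece shrinks to roughly this (again a hypothetical sketch; the endpoint and whatever JSON it returns are made up): capture a photo, POST it, and let the server do the heavy face-processing.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Cloud-based variant: the phone only captures a photo and uploads it;
// the face-processing happens server-side. Endpoint and response are made up.
public class CloudFaceScanClient
{
    private readonly HttpClient _http = new HttpClient();

    public async Task<string> AnalyzeAsync(string apiUrl, byte[] photoJpeg)
    {
        var content = new ByteArrayContent(photoJpeg);
        content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");
        var response = await _http.PostAsync(apiUrl, content);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();   // e.g. landmark/mesh data as JSON
    }
}
```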
So the only questions are whether EA built it ‘completely native’, whether they used any additional ‘cloud-based’ technology, what the ‘limitations’ of that approach are… and WHAT ELSE the software does, aside from just scanning faces & uploading the data to EA’s servers.
But we will find out whenever the app actually drops.
On a side note: here’s some video footage of the GameFaceHD app working on a mobile device back in June, about a week after E3 finished: